From bugzilla at redhat.com Wed May 1 12:49:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 01 May 2019 12:49:15 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22652 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 1 12:49:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 01 May 2019 12:49:16 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #643 from Worker Ant --- REVIEW: https://review.gluster.org/22652 (glusterd-utils.c: reduce work in glusterd_add_volume_to_dict()) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 1 12:56:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 01 May 2019 12:56:35 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #19 from Amar Tumballi --- Yes please, you can re-enable write-behind if you have upgraded to glusterfs-5.5 or glusterfs-6.x releases. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 1 12:57:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 01 May 2019 12:57:02 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 2 03:24:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 02 May 2019 03:24:38 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #644 from Worker Ant --- REVIEW: https://review.gluster.org/22580 (tests: Add changelog snapshot testcase) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu May 2 04:01:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 02 May 2019 04:01:29 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com Flags| |needinfo?(atumball at redhat.c | |om) --- Comment #63 from Nithya Balachandran --- (In reply to abhays from comment #60) > Hi @Nithya, > > Any updates on this issue? > Seems that the same test cases are failing in the Glusterfs v6.1 with > additional ones:- > ./tests/bugs/replicate/bug-1655854-support-dist-to-rep3-arb-conversion.t > ./tests/features/fuse-lru-limit.t > > And one query we have with respect to these failures whether they affect the > main functionality of Glusterfs or they can be ignored for now? > Please let us know. > > > Also, s390x systems have been added on the gluster-ci. Any updates regards > to that? I am no longer working on this. @Amar, please assign this to the appropriate person. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 2 04:53:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 02 May 2019 04:53:10 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #33 from Worker Ant --- REVIEW: https://review.gluster.org/22630 (tests: add .t files to increase cli code coverage) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 2 06:00:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 02 May 2019 06:00:52 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #34 from Worker Ant --- REVIEW: https://review.gluster.org/22631 (tests/cli: add .t file to increase line coverage in cli) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 2 07:10:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 02 May 2019 07:10:11 +0000 Subject: [Bugs] [Bug 1705351] New: glusterfsd crash after days of running Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705351 Bug ID: 1705351 Summary: glusterfsd crash after days of running Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: HDFS Severity: urgent Assignee: bugs at gluster.org Reporter: waza123 at inbox.lv CC: bugs at gluster.org Target Milestone: --- Classification: Community One of the bricks' glusterfsd processes just crashed and it can't be started again. What can I do to start it again? Crash dump (gdb): Program terminated with signal SIGSEGV, Segmentation fault. 
#0 up_lk (frame=0x7fea88193f30, this=0x7feb3401c770, fd=0x0, cmd=6, flock=0x7feb0d174d40, xdata=0x0) at upcall.c:239 239 local = upcall_local_init (frame, this, NULL, NULL, fd->inode, NULL); [Current thread is 1 (Thread 0x7feb0031e700 (LWP 12319))] (gdb) bt #0 up_lk (frame=0x7fea88193f30, this=0x7feb3401c770, fd=0x0, cmd=6, flock=0x7feb0d174d40, xdata=0x0) at upcall.c:239 #1 0x00007feb3e1cf65d in default_lk_resume (frame=0x7feb0d174ae0, this=0x7feb3401e060, fd=0x0, cmd=6, lock=0x7feb0d174d40, xdata=0x0) at defaults.c:1833 #2 0x00007feb3e166f35 in call_resume (stub=0x7feb0d174bf0) at call-stub.c:2508 #3 0x00007feb31e00d74 in iot_worker (data=0x7feb34058480) at io-threads.c:222 #4 0x00007feb3d8ca6ba in start_thread (arg=0x7feb0031e700) at pthread_create.c:333 #5 0x00007feb3d60041d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109 (gdb) bt full #0 up_lk (frame=0x7fea88193f30, this=0x7feb3401c770, fd=0x0, cmd=6, flock=0x7feb0d174d40, xdata=0x0) at upcall.c:239 op_errno = -1 local = 0x0 __FUNCTION__ = "up_lk" #1 0x00007feb3e1cf65d in default_lk_resume (frame=0x7feb0d174ae0, this=0x7feb3401e060, fd=0x0, cmd=6, lock=0x7feb0d174d40, xdata=0x0) at defaults.c:1833 _new = 0x7fea88193f30 old_THIS = 0x7feb3401e060 tmp_cbk = 0x7feb3e1bafa0 __FUNCTION__ = "default_lk_resume" #2 0x00007feb3e166f35 in call_resume (stub=0x7feb0d174bf0) at call-stub.c:2508 old_THIS = 0x7feb3401e060 __FUNCTION__ = "call_resume" #3 0x00007feb31e00d74 in iot_worker (data=0x7feb34058480) at io-threads.c:222 conf = 0x7feb34058480 this = stub = 0x7feb0d174bf0 sleep_till = {tv_sec = 1556637893, tv_nsec = 0} ret = pri = 1 bye = _gf_false __FUNCTION__ = "iot_worker" #4 0x00007feb3d8ca6ba in start_thread (arg=0x7feb0031e700) at pthread_create.c:333 __res = pd = 0x7feb0031e700 now = unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140647297312512, 5756482990956014801, 0, 140648089937359, 140647297313216, 140648166818944, -5749651260269466415, -5749590536105693999}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}} not_first_call = pagesize_m1 = sp = freesize = __PRETTY_FUNCTION__ = "start_thread" #5 0x00007feb3d60041d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109 No locals. 
(gdb) # config # gluster volume info Volume Name: hadoop_volume Type: Disperse Volume ID: f13b43b0-ff9e-429b-81ed-15c92cdd1181 Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: hdd1:/hadoop Brick2: hdd2:/hadoop Brick3: hdd3:/hadoop Options Reconfigured: cluster.disperse-self-heal-daemon: enable server.statedump-path: /tmp performance.client-io-threads: on server.event-threads: 16 client.event-threads: 16 cluster.lookup-optimize: on performance.parallel-readdir: on transport.address-family: inet nfs.disable: on features.cache-invalidation: on features.cache-invalidation-timeout: 600 performance.stat-prefetch: on performance.cache-invalidation: on performance.md-cache-timeout: 600 network.inode-lru-limit: 500000 features.lock-heal: on # status # gluster volume status Status of volume: hadoop_volume Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick hdd1:/hadoop 49152 0 Y 5085 Brick hdd2:/hadoop 49152 0 Y 4044 Self-heal Daemon on localhost N/A N/A Y 2383 Self-heal Daemon on serv3 N/A N/A Y 2423 Self-heal Daemon on serv2 N/A N/A Y 3429 Self-heal Daemon on hdd2 N/A N/A Y 4035 Self-heal Daemon on hdd1 N/A N/A Y 5076 Task Status of Volume hadoop_volume ------------------------------------------------------------------------------ There are no active volume tasks -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 2 12:13:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 02 May 2019 12:13:45 +0000 Subject: [Bugs] [Bug 1705351] glusterfsd crash after days of running In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705351 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |jahernan at redhat.com --- Comment #1 from Xavi Hernandez --- Can you upload the coredump to be able to analyze it ? I will also need to know the exact version of gluster and the operating system you are using. To restart the crashed brick, the following command should help: # gluster volume start hadoop_volume force -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 2 12:16:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 02 May 2019 12:16:21 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1616 from Worker Ant --- REVIEW: https://review.gluster.org/22619 (glusterd: Fix coverity defects & put coverity annotations) merged (#9) on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Thu May 2 17:10:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 02 May 2019 17:10:34 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 --- Comment #20 from Xavi Hernandez --- >From my debugging I think the issue is related to a missing fd_ref() when ob_open_behind() is used. 
This could potentially cause a race when the same fd is being unref'ed (refcount becoming 0) and ref'ed at the same time to handle some open_and_resume() requests. I have not yet identified the exact sequence of operations that cause the problem though. Knowing that the problem really comes from here, I'll investigate further. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 3 04:37:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 03 May 2019 04:37:03 +0000 Subject: [Bugs] [Bug 1705865] New: VM stuck in a shutdown because of a pending fuse request Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705865 Bug ID: 1705865 Summary: VM stuck in a shutdown because of a pending fuse request Product: GlusterFS Version: mainline OS: Linux Status: NEW Component: write-behind Severity: medium Priority: medium Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: nravinas at redhat.com, rgowdapp at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com Depends On: 1702686 Target Milestone: --- Group: redhat Classification: Community -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri May 3 04:38:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 03 May 2019 04:38:17 +0000 Subject: [Bugs] [Bug 1705865] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705865 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED --- Comment #1 from Raghavendra G --- VM fails to shutdown, getting stuck in 'Powering down' status. This is because its 'qemu-kvm' process gets in a zombie/defunct state: more ps-Ll.txt F S UID PID PPID LWP C PRI NI ADDR SZ WCHAN TTY TIME CMD 6 Z 107 20631 1 20631 0 80 0 - 0 do_exi ? 8:45 [qemu-kvm] 3 D 107 20631 1 20635 0 80 0 - 2386845 fuse_r ? 1:12 [qemu-kvm] The customer has collected a crash dump of the affected VM and also statedumps from all the glusterfs process running in this machine when this problem is present. 
Thread ID 20635 is the one of interest: crash> bt 20635 PID: 20635 TASK: ffff9ed3926eb0c0 CPU: 7 COMMAND: "IO iothread1" #0 [ffff9ec8e351fa28] __schedule at ffffffff91967747 #1 [ffff9ec8e351fab0] schedule at ffffffff91967c49 #2 [ffff9ec8e351fac0] __fuse_request_send at ffffffffc09d24e5 [fuse] #3 [ffff9ec8e351fb30] fuse_request_send at ffffffffc09d26e2 [fuse] #4 [ffff9ec8e351fb40] fuse_send_write at ffffffffc09dbc76 [fuse] #5 [ffff9ec8e351fb70] fuse_direct_io at ffffffffc09dc0d6 [fuse] #6 [ffff9ec8e351fc58] __fuse_direct_write at ffffffffc09dc562 [fuse] #7 [ffff9ec8e351fca8] fuse_direct_IO at ffffffffc09dd3ca [fuse] #8 [ffff9ec8e351fd70] generic_file_direct_write at ffffffff913b8663 #9 [ffff9ec8e351fdc8] fuse_file_aio_write at ffffffffc09ddbd5 [fuse] #10 [ffff9ec8e351fe60] do_io_submit at ffffffff91497a73 #11 [ffff9ec8e351ff40] sys_io_submit at ffffffff91497f40 #12 [ffff9ec8e351ff50] tracesys at ffffffff9197505b (via system_call) RIP: 00007f9ff0758697 RSP: 00007f9db86814b8 RFLAGS: 00000246 RAX: ffffffffffffffda RBX: 0000000000000001 RCX: ffffffffffffffff RDX: 00007f9db86814d0 RSI: 0000000000000001 RDI: 00007f9ff268e000 RBP: 0000000000000080 R8: 0000000000000080 R9: 000000000000006a R10: 0000000000000078 R11: 0000000000000246 R12: 00007f9db86814c0 R13: 0000560264b9b518 R14: 0000560264b9b4f0 R15: 00007f9db8681bb0 ORIG_RAX: 00000000000000d1 CS: 0033 SS: 002b >From the core, this is the file the above process is writing to: crash> files -d 0xffff9ec8e8f9f740 DENTRY INODE SUPERBLK TYPE PATH ffff9ec8e8f9f740 ffff9ed39e705700 ffff9ee009adc000 REG /rhev/data-center/mnt/glusterSD/172.16.20.21:_vmstore2/e5dd645f-88bb-491c-9145-38fa229cbc4d/images/8e84c1ed-48ba-4b82-9882-c96e6f260bab/29bba0a1-6c7b-4358-9ef2-f8080405778d So in this case we're accessing the vmstore2 volume. This is the glusterfs process: root 4863 0.0 0.0 1909580 49316 ? S bt 4863 PID: 4863 TASK: ffff9edfa9ff9040 CPU: 11 COMMAND: "glusterfs" #0 [ffff9ed3a332fc28] __schedule at ffffffff91967747 #1 [ffff9ed3a332fcb0] schedule at ffffffff91967c49 #2 [ffff9ed3a332fcc0] futex_wait_queue_me at ffffffff9130cf76 #3 [ffff9ed3a332fd00] futex_wait at ffffffff9130dc5b #4 [ffff9ed3a332fe48] do_futex at ffffffff9130f9a6 #5 [ffff9ed3a332fed8] sys_futex at ffffffff9130fec0 #6 [ffff9ed3a332ff50] system_call_fastpath at ffffffff91974ddb RIP: 00007f6e5eeccf47 RSP: 00007ffdd311c7d0 RFLAGS: 00000246 RAX: 00000000000000ca RBX: 00007f6e59496700 RCX: ffffffffffffffff RDX: 0000000000001308 RSI: 0000000000000000 RDI: 00007f6e594969d0 RBP: 00007f6e60552780 R8: 0000000000000000 R9: 00007f6e5e6e314d R10: 0000000000000000 R11: 0000000000000246 R12: 00007f6e59496d28 R13: 0000000000000000 R14: 0000000000000006 R15: 00007ffdd311c920 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b We have a few pending frames in this process. 
Reviewing the corresponding statedump: grep complete=0 glusterdump.4863.dump.1556091368 -c 7 Looking for these pending frames in the statedump: ~~~ [global.callpool.stack.1] stack=0x7f6e4007c828 uid=107 gid=107 pid=20635 unique=5518502 lk-owner=bd2351a6cc7fcb8b op=WRITE type=1 cnt=6 [global.callpool.stack.1.frame.1] frame=0x7f6dec04de38 ref_count=0 translator=vmstore2-write-behind complete=0 parent=vmstore2-open-behind wind_from=default_writev_resume wind_to=(this->children->xlator)->fops->writev unwind_to=default_writev_cbk [global.callpool.stack.1.frame.2] frame=0x7f6dec0326f8 ref_count=1 translator=vmstore2-open-behind complete=0 parent=vmstore2-md-cache wind_from=mdc_writev wind_to=(this->children->xlator)->fops->writev unwind_to=mdc_writev_cbk [global.callpool.stack.1.frame.3] frame=0x7f6dec005bf8 ref_count=1 translator=vmstore2-md-cache complete=0 parent=vmstore2-io-threads wind_from=default_writev_resume wind_to=(this->children->xlator)->fops->writev unwind_to=default_writev_cbk [global.callpool.stack.1.frame.4] frame=0x7f6e400ab0f8 ref_count=1 translator=vmstore2-io-threads complete=0 parent=vmstore2 wind_from=io_stats_writev wind_to=(this->children->xlator)->fops->writev unwind_to=io_stats_writev_cbk [global.callpool.stack.1.frame.5] frame=0x7f6e4007c6c8 ref_count=1 translator=vmstore2 complete=0 parent=fuse wind_from=fuse_write_resume wind_to=FIRST_CHILD(this)->fops->writev unwind_to=fuse_writev_cbk [global.callpool.stack.1.frame.6] frame=0x7f6e4002cb98 ref_count=1 translator=fuse complete=0 ~~~ So I believe we're pending in the 'write-behind' translator. Please, I'd need some help to figure out the cause of the hang. Thank you, Natalia -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri May 3 06:19:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 03 May 2019 06:19:14 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #645 from Worker Ant --- REVIEW: https://review.gluster.org/22652 (glusterd-utils.c: reduce work in glusterd_add_volume_to_dict()) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 3 06:44:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 03 May 2019 06:44:21 +0000 Subject: [Bugs] [Bug 1705884] New: Image size as reported from the fuse mount is incorrect Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705884 Bug ID: 1705884 Summary: Image size as reported from the fuse mount is incorrect Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: sharding Severity: high Assignee: bugs at gluster.org Reporter: kdhananj at redhat.com QA Contact: bugs at gluster.org CC: bugs at gluster.org, kdhananj at redhat.com, pasik at iki.fi, rhs-bugs at redhat.com, sabose at redhat.com, sankarshan at redhat.com, sasundar at redhat.com, storage-qa-internal at redhat.com Depends On: 1668001 Blocks: 1667998 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1668001 +++ Description of problem: ----------------------- The size of the VM image file as reported from the fuse mount is incorrect. For the file of size 1 TB, the size of the file on the disk is reported as 8 ZB. 
Version-Release number of selected component (if applicable): ------------------------------------------------------------- upstream master How reproducible: ------------------ Always Steps to Reproduce: ------------------- 1. On the Gluster storage domain, create the preallocated disk image of size 1TB 2. Check for the size of the file after its creation has succeesded Actual results: --------------- Size of the file is reported as 8 ZB, though the size of the file is 1TB Expected results: ----------------- Size of the file should be the same as the size created by the user Additional info: ---------------- Volume in the question is replica 3 sharded [root at rhsqa-grafton10 ~]# gluster volume info data Volume Name: data Type: Replicate Volume ID: 7eb49e90-e2b6-4f8f-856e-7108212dbb72 Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: rhsqa-grafton10.lab.eng.blr.redhat.com:/gluster_bricks/data/data Brick2: rhsqa-grafton11.lab.eng.blr.redhat.com:/gluster_bricks/data/data Brick3: rhsqa-grafton12.lab.eng.blr.redhat.com:/gluster_bricks/data/data (arbiter) Options Reconfigured: performance.client-io-threads: on nfs.disable: on transport.address-family: inet performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.low-prio-threads: 32 network.remote-dio: off cluster.eager-lock: enable cluster.quorum-type: auto cluster.server-quorum-type: server cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-max-threads: 8 cluster.shd-wait-qlength: 10000 features.shard: on user.cifs: off cluster.choose-local: off client.event-threads: 4 server.event-threads: 4 storage.owner-uid: 36 storage.owner-gid: 36 network.ping-timeout: 30 performance.strict-o-direct: on cluster.granular-entry-heal: enable cluster.enable-shared-storage: enable --- Additional comment from SATHEESARAN on 2019-01-21 16:32:39 UTC --- Size of the file as reported from the fuse mount: [root@ ~]# ls -lsah /rhev/data-center/mnt/glusterSD/rhsqa-grafton10.lab.eng.blr.redhat.com\:_data/bbeee86f-f174-4ec7-9ea3-a0df28709e64/images/0206953c-4850-4969-9dad-15140579d354/eaa5e81d-103c-4ce6-947e-8946806cca1b 8.0Z -rw-rw----. 1 vdsm kvm 1.1T Jan 21 17:14 /rhev/data-center/mnt/glusterSD/rhsqa-grafton10.lab.eng.blr.redhat.com:_data/bbeee86f-f174-4ec7-9ea3-a0df28709e64/images/0206953c-4850-4969-9dad-15140579d354/eaa5e81d-103c-4ce6-947e-8946806cca1b [root@ ~]# du -shc /rhev/data-center/mnt/glusterSD/rhsqa-grafton10.lab.eng.blr.redhat.com\:_data/bbeee86f-f174-4ec7-9ea3-a0df28709e64/images/0206953c-4850-4969-9dad-15140579d354/eaa5e81d-103c-4ce6-947e-8946806cca1b 16E /rhev/data-center/mnt/glusterSD/rhsqa-grafton10.lab.eng.blr.redhat.com:_data/bbeee86f-f174-4ec7-9ea3-a0df28709e64/images/0206953c-4850-4969-9dad-15140579d354/eaa5e81d-103c-4ce6-947e-8946806cca1b 16E total Note that the disk image is preallocated with 1072GB of space --- Additional comment from SATHEESARAN on 2019-04-01 19:25:15 UTC --- (In reply to SATHEESARAN from comment #5) > (In reply to Krutika Dhananjay from comment #3) > > Also, do you still have the setup in this state? If so, can I'd like to take > > a look. > > > > -Krutika > > Hi Krutika, > > The setup is no longer available. Let me recreate the issue and provide you > the setup This issue is very easily reproducible. Create a preallocated image on the replicate volume with sharding enabled. Use 'qemu-img' to create the VM image. 
See the following test: [root@ ~]# qemu-img create -f raw -o preallocation=falloc /mnt/test/vm1.img 1T Formatting '/mnt/test/vm1.img', fmt=raw size=1099511627776 preallocation='falloc' [root@ ]# ls /mnt/test vm1.img [root@ ]# ls -lsah vm1.img 8.0Z -rw-r--r--. 1 root root 1.0T Apr 2 00:45 vm1.img --- Additional comment from Krutika Dhananjay on 2019-04-11 06:07:35 UTC --- So I tried this locally and I am not hitting the issue - [root at dhcpxxxxx ~]# qemu-img create -f raw -o preallocation=falloc /mnt/vm1.img 10G Formatting '/mnt/vm1.img', fmt=raw size=10737418240 preallocation=falloc [root at dhcpxxxxx ~]# ls -lsah /mnt/vm1.img 10G -rw-r--r--. 1 root root 10G Apr 11 11:26 /mnt/vm1.img [root at dhcpxxxxx ~]# qemu-img create -f raw -o preallocation=falloc /mnt/vm1.img 30G Formatting '/mnt/vm1.img', fmt=raw size=32212254720 preallocation=falloc [root at dhcpxxxxx ~]# ls -lsah /mnt/vm1.img 30G -rw-r--r--. 1 root root 30G Apr 11 11:32 /mnt/vm1.img Of course, I didn't go beyond 30G due to space constraints on my laptop. If you could share your setup where you're hitting this bug, I'll take a look. -Krutika --- Additional comment from SATHEESARAN on 2019-05-02 05:21:01 UTC --- (In reply to Krutika Dhananjay from comment #7) > So I tried this locally and I am not hitting the issue - > > [root at dhcpxxxxx ~]# qemu-img create -f raw -o preallocation=falloc > /mnt/vm1.img 10G > Formatting '/mnt/vm1.img', fmt=raw size=10737418240 preallocation=falloc > [root at dhcpxxxxx ~]# ls -lsah /mnt/vm1.img > 10G -rw-r--r--. 1 root root 10G Apr 11 11:26 /mnt/vm1.img > > [root at dhcpxxxxx ~]# qemu-img create -f raw -o preallocation=falloc > /mnt/vm1.img 30G > Formatting '/mnt/vm1.img', fmt=raw size=32212254720 preallocation=falloc > [root at dhcpxxxxx ~]# ls -lsah /mnt/vm1.img > 30G -rw-r--r--. 1 root root 30G Apr 11 11:32 /mnt/vm1.img > > Of course, I didn't go beyond 30G due to space constraints on my laptop. > > If you could share your setup where you're hitting this bug, I'll take a > look. > > -Krutika I can see this very consistently in two ways: 1. Create VM image >= 1TB -------------------------- [root at rhsqa-grafton7 test]# qemu-img create -f raw -o preallocation=falloc vm1.img 10G Formatting 'vm1.img', fmt=raw size=10737418240 preallocation=falloc [root@ ]# ls -lsah vm1.img 10G -rw-r--r--. 1 root root 10G May 2 10:30 vm1.img [root@ ]# qemu-img create -f raw -o preallocation=falloc vm2.img 50G Formatting 'vm2.img', fmt=raw size=53687091200 preallocation=falloc [root@ ]# ls -lsah vm2.img 50G -rw-r--r--. 1 root root 50G May 2 10:30 vm2.img [root@ ]# qemu-img create -f raw -o preallocation=falloc vm3.img 100G Formatting 'vm3.img', fmt=raw size=107374182400 preallocation=falloc [root@ ]# ls -lsah vm3.img 100G -rw-r--r--. 1 root root 100G May 2 10:33 vm3.img [root@ ]# qemu-img create -f raw -o preallocation=falloc vm4.img 500G Formatting 'vm4.img', fmt=raw size=536870912000 preallocation=falloc [root@ ]# ls -lsah vm4.img 500G -rw-r--r--. 1 root root 500G May 2 10:33 vm4.img Once the size reaches 1TB, you will see this issue: [root@ ]# qemu-img create -f raw -o preallocation=falloc vm6.img 1T Formatting 'vm6.img', fmt=raw size=1099511627776 preallocation=falloc [root@ ]# ls -lsah vm6.img 8.0Z -rw-r--r--. 1 root root 1.0T May 2 10:35 vm6.img <-------- size on disk is much larger than expected 2. 
Recreate the image with the same name ----------------------------------------- Observe that for the second time, the image is created with the same name [root@ ]# qemu-img create -f raw -o preallocation=falloc vm1.img 10G Formatting 'vm1.img', fmt=raw size=10737418240 preallocation=falloc [root@ ]# ls -lsah vm1.img 10G -rw-r--r--. 1 root root 10G May 2 10:40 vm1.img [root@ ]# qemu-img create -f raw -o preallocation=falloc vm1.img 20G <-------- The same file name vm1.img is used Formatting 'vm1.img', fmt=raw size=21474836480 preallocation=falloc [root@ ]# ls -lsah vm1.img 30G -rw-r--r--. 1 root root 20G May 2 10:40 vm1.img <---------- size on the disk is 30G, though the file is created with 20G I will provide setup for the investigation --- Additional comment from SATHEESARAN on 2019-05-02 05:23:07 UTC --- The setup details: ------------------- rhsqa-grafton7.lab.eng.blr.redhat.com ( root/redhat ) volume: data ( replica 3, sharded ) The volume is currently mounted at: /mnt/test Note: This is the RHVH installation. @krutika, if you need more info, just ping me in IRC / google chat --- Additional comment from Krutika Dhananjay on 2019-05-02 10:16:40 UTC --- Found part of the issue. It's just a case of integer overflow. 32-bit signed int is being used to store delta between post-stat and pre-stat block-counts. The range of numbers for 32-bit signed int is [-2,147,483,648, 2,147,483,647] whereas the number of blocks allocated as part of creating a preallocated 1TB file is (1TB/512) = 2,147,483,648 which is just 1 more than INT_MAX (2,147,483,647) which spills over to the negative half the scale making it -2,147,483,648. This number, on being copied to int64 causes the most-significant 32 bits to be filled with 1 making the block-count equal 554050781183 (or 0xffffffff80000000) in magnitude. That's the block-count that gets set on the backend in trusted.glusterfs.shard.file-size xattr in the block-count segment - [root at rhsqa-grafton7 data]# getfattr -d -m . -e hex /gluster_bricks/data/data/vm3.img getfattr: Removing leading '/' from absolute path names # file: gluster_bricks/data/data/vm3.img security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.dirty=0x000000000000000000000000 trusted.gfid=0x3faffa7142b74e739f3a82b9359d33e6 trusted.gfid2path.6356251b968111ad=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f766d332e696d67 trusted.glusterfs.shard.block-size=0x0000000004000000 trusted.glusterfs.shard.file-size=0x00000100000000000000000000000000ffffffff800000000000000000000000 <-- notice the "ffffffff80000000" in the block-count segment But .. [root at rhsqa-grafton7 test]# stat vm3.img File: ?vm3.img? Size: 1099511627776 Blocks: 18446744071562067968 IO Block: 131072 regular file Device: 29h/41d Inode: 11473626732659815398 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Context: system_u:object_r:fusefs_t:s0 Access: 2019-05-02 14:11:11.693559069 +0530 Modify: 2019-05-02 14:12:38.245068328 +0530 Change: 2019-05-02 14:15:56.190546751 +0530 Birth: - stat shows block-count as 18446744071562067968 which is way bigger than (554050781183 * 512). In the response path, turns out the block-count further gets assigned to a uint64 number. The same number, when expressed as uint64 becomes 18446744071562067968. 18446744071562067968 * 512 is a whopping 8.0 Zettabytes! 
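The arithmetic above is easy to reproduce outside of Gluster. The following is a standalone, hypothetical C sketch (not the actual shard translator code; the name delta_blocks is only borrowed from the comment) showing how the 1 TiB block count wraps in a 32-bit signed int and reappears as 18446744071562067968 once it is widened and later read back as unsigned:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t file_size  = 1099511627776ULL;      /* 1 TiB */
    int64_t  new_blocks = file_size / 512;       /* 2,147,483,648 = INT32_MAX + 1 */

    /* Stand-in for the 32-bit signed delta variable described above;
       on a typical two's-complement system the value wraps to INT32_MIN. */
    int32_t  delta_blocks = (int32_t)new_blocks; /* -2,147,483,648 */

    int64_t  widened  = delta_blocks;            /* sign-extended: 0xffffffff80000000 */
    uint64_t reported = (uint64_t)widened;       /* what stat ends up reporting */

    printf("delta_blocks (int32): %" PRId32 "\n", delta_blocks);
    printf("widened      (int64): 0x%016" PRIx64 "\n", (uint64_t)widened);
    printf("reported    (uint64): %" PRIu64 " blocks (x 512 bytes, about 8 ZB)\n", reported);
    return 0;
}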
This bug wasn't seen earlier because the earlier way of preallocating files never used fallocate, so the original signed 32 int variable delta_blocks would never exceed 131072. Anyway, I'll be soon sending a fix for this. Sas, Do you have a single node with at least 1TB free space that you can lend me where I can test the fix? The bug will only be hit when the image size is > 1TB. -Krutika --- Additional comment from Krutika Dhananjay on 2019-05-02 10:18:26 UTC --- (In reply to Krutika Dhananjay from comment #10) > Found part of the issue. Sorry, this not part of the issue but THE issue in its entirety. (That line is from an older draft I'd composed which I forgot to change after rc'ing the bug) > > It's just a case of integer overflow. > 32-bit signed int is being used to store delta between post-stat and > pre-stat block-counts. > The range of numbers for 32-bit signed int is [-2,147,483,648, > 2,147,483,647] whereas the number of blocks allocated > as part of creating a preallocated 1TB file is (1TB/512) = 2,147,483,648 > which is just 1 more than INT_MAX (2,147,483,647) > which spills over to the negative half the scale making it -2,147,483,648. > This number, on being copied to int64 causes the most-significant 32 bits to > be filled with 1 making the block-count equal 554050781183 (or > 0xffffffff80000000) in magnitude. > That's the block-count that gets set on the backend in > trusted.glusterfs.shard.file-size xattr in the block-count segment - > > [root at rhsqa-grafton7 data]# getfattr -d -m . -e hex > /gluster_bricks/data/data/vm3.img > getfattr: Removing leading '/' from absolute path names > # file: gluster_bricks/data/data/vm3.img > security. > selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f7 > 43a733000 > trusted.afr.dirty=0x000000000000000000000000 > trusted.gfid=0x3faffa7142b74e739f3a82b9359d33e6 > trusted.gfid2path. > 6356251b968111ad=0x30303030303030302d303030302d303030302d303030302d3030303030 > 303030303030312f766d332e696d67 > > trusted.glusterfs.shard.block-size=0x0000000004000000 > trusted.glusterfs.shard.file- > size=0x00000100000000000000000000000000ffffffff800000000000000000000000 <-- > notice the "ffffffff80000000" in the block-count segment > > But .. > > [root at rhsqa-grafton7 test]# stat vm3.img > File: ?vm3.img? > Size: 1099511627776 Blocks: 18446744071562067968 IO Block: 131072 > regular file > Device: 29h/41d Inode: 11473626732659815398 Links: 1 > Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) > Context: system_u:object_r:fusefs_t:s0 > Access: 2019-05-02 14:11:11.693559069 +0530 > Modify: 2019-05-02 14:12:38.245068328 +0530 > Change: 2019-05-02 14:15:56.190546751 +0530 > Birth: - > > stat shows block-count as 18446744071562067968 which is way bigger than > (554050781183 * 512). > > In the response path, turns out the block-count further gets assigned to a > uint64 number. > The same number, when expressed as uint64 becomes 18446744071562067968. > 18446744071562067968 * 512 is a whopping 8.0 Zettabytes! > > This bug wasn't seen earlier because the earlier way of preallocating files > never used fallocate, so the original signed 32 int variable delta_blocks > would never exceed 131072. > > Anyway, I'll be soon sending a fix for this. 
Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1667998 [Bug 1667998] Image size as reported from the fuse mount is incorrect https://bugzilla.redhat.com/show_bug.cgi?id=1668001 [Bug 1668001] Image size as reported from the fuse mount is incorrect -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 3 06:56:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 03 May 2019 06:56:33 +0000 Subject: [Bugs] [Bug 1654753] A distributed-disperse volume crashes when a symbolic link is renamed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654753 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com Severity|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 3 06:58:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 03 May 2019 06:58:51 +0000 Subject: [Bugs] [Bug 1705884] Image size as reported from the fuse mount is incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705884 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22655 (features/shard: Fix integer overflow in block count accounting) posted (#1) for review on master by Krutika Dhananjay -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 3 06:58:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 03 May 2019 06:58:50 +0000 Subject: [Bugs] [Bug 1705884] Image size as reported from the fuse mount is incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705884 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22655 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 3 07:23:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 03 May 2019 07:23:12 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22656 -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 3 07:23:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 03 May 2019 07:23:13 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1617 from Worker Ant --- REVIEW: https://review.gluster.org/22656 (glusterd: prevent use-after-free in glusterd_op_ac_send_brick_op()) posted (#1) for review on master by Niels de Vos -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri May 3 09:51:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 03 May 2019 09:51:42 +0000 Subject: [Bugs] [Bug 1705351] glusterfsd crash after days of running In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705351 --- Comment #2 from waza123 at inbox.lv --- https://drive.google.com/file/d/1n2IeRNqwXYmF1q664Rvtr5RuDu5taDz9/view?usp=sharing -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 3 12:06:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 03 May 2019 12:06:55 +0000 Subject: [Bugs] [Bug 1705884] Image size as reported from the fuse mount is incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705884 SATHEESARAN changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1668001 Depends On|1668001 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1668001 [Bug 1668001] Image size as reported from the fuse mount is incorrect -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 3 22:30:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 03 May 2019 22:30:15 +0000 Subject: [Bugs] [Bug 1690769] GlusterFS 5.5 crashes in 1x4 replicate setup. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690769 --- Comment #4 from Artem Russakovskii --- Phew, this was a fun one! Long story short - after weeks of debugging with the amazing Gluster team (thanks, Amar and Xavi!), we have found the root of the problem and a solution. The crash happens on CPUs with an 'rtm' flag, in combination with slightly older versions of glibc, specifically 2.26. The bug is fixed in glibc 2.29. 
For example, 3 of our machines had these CPUs (run lscpu to find out): Model name: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm pti fsgsbase tsc_adjust smep erms xsaveopt arat And the one that was crashing had this one: Model name: Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat Since the version of glibc for OpenSUSE 15.0 is currently 2.26, the easiest solution was to migrate the box to a CPU without the rtm feature, which we've now done and confirmed the crash is gone. Before the migration, Xavi did find a workaround: 1. export GLIBC_TUNABLES=glibc.tune.hwcaps=-RTM 2. Unmount and remount. 3. Confirm the above worked: for i in $(pgrep glusterfs); do ps h -o cmd -p $i; cat /proc/$i/environ | xargs -0 -n 1 | grep "GLIBC_TUNABLES"; done More info about this lock elision feature, as well as a quick test program can be found here: https://sourceware.org/bugzilla/show_bug.cgi?id=23275. Here are sample runs on hardware with 'rtm' feature (crash observed) and without (no crash): gcc -pthread test.c -o test archon810 at citadel:/tmp> ./test Please add a check if lock-elision is available on your architecture. The check in check_if_lock_elision_is_available () assumes, that lock-elision is enabled! main: start 3 threads to run 2000000 iterations. #0: started #1: started #2: started .#0: pthread_mutex_destroy: failed with 16; in round=2295; Aborted archon810 at hive:/tmp> ./test Please add a check if lock-elision is available on your architecture. The check in check_if_lock_elision_is_available () assumes, that lock-elision is enabled! main: start 3 threads to run 2000000 iterations. #0: started #2: started #1: started ........................................................................................................................................................................................................main: end. Not sure how the maintainers will choose to close this issue, but I hope it'll help someone in the future, especially since we spent countless hours analyzing and debugging (hopefully, not all in vain!). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 4 07:23:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 04 May 2019 07:23:48 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22659 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
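For anyone hitting the same crash, the checks and workaround described in the comment above boil down to a few commands. This is a minimal sketch under stated assumptions: <SERVER>, <VOLUME> and <MOUNTPOINT> are placeholders, and the tunable syntax is the glibc 2.26-era one quoted above.

# 1. Affected combination: CPU advertising rtm + glibc older than 2.29
lscpu | grep -qw rtm && echo "CPU exposes rtm"
ldd --version | head -n1                  # prints the glibc version

# 2. Disable RTM lock elision for processes started from this shell
export GLIBC_TUNABLES=glibc.tune.hwcaps=-RTM

# 3. Remount from the same shell so the fuse client inherits the variable
umount <MOUNTPOINT>
mount -t glusterfs <SERVER>:/<VOLUME> <MOUNTPOINT>

# 4. Verify the tunable reached the running glusterfs processes
for i in $(pgrep glusterfs); do
    ps h -o cmd -p $i
    cat /proc/$i/environ | xargs -0 -n 1 | grep "GLIBC_TUNABLES"
done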
From bugzilla at redhat.com Sat May 4 07:23:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 04 May 2019 07:23:49 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #646 from Worker Ant --- REVIEW: https://review.gluster.org/22659 ([WIP]glusterd-utils.c: reduce some work) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun May 5 15:52:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 05 May 2019 15:52:18 +0000 Subject: [Bugs] [Bug 1706603] New: Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706603 Bug ID: 1706603 Summary: Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT Product: GlusterFS Version: mainline Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: pkarampu at redhat.com Target Milestone: --- Group: private Classification: Community Mount crashing in an 'ASSERT' that checks the inode size in the function ec-inode-write.c Program terminated with signal 11, Segmentation fault. #0 0x00007f5502715dcb in ec_manager_truncate (fop=0x7f53ff654910, state=) at ec-inode-write.c:1475 1475 GF_ASSERT(ec_get_inode_size(fop, fop->locks[0].lock->loc.inode, This is the corresponding thread: Thread 1 (Thread 0x7f54f907a700 (LWP 31806)): #0 0x00007f5502715dcb in ec_manager_truncate (fop=0x7f53ff654910, state=) at ec-inode-write.c:1475 #1 0x00007f55026f399b in __ec_manager (fop=0x7f53ff654910, error=0) at ec-common.c:2698 #2 0x00007f55026f3b78 in ec_resume (fop=0x7f53ff654910, error=0) at ec-common.c:481 #3 0x00007f55026f3c9f in ec_complete (fop=0x7f53ff654910) at ec-common.c:554 #4 0x00007f5502711d0c in ec_inode_write_cbk (frame=, this=0x7f54fc186380, cookie=0x3, op_ret=0, op_errno=0, prestat=0x7f54f9079920, poststat=0x7f54f9079990, xdata=0x0) at ec-inode-write.c:156 #5 0x00007f550298224c in client3_3_ftruncate_cbk (req=, iov=, count=, myframe=0x7f5488ba7870) at client-rpc-fops.c:1415 #6 0x00007f5510476960 in rpc_clnt_handle_reply (clnt=clnt at entry=0x7f54fc4a1330, pollin=pollin at entry=0x7f549b65dc30) at rpc-clnt.c:778 #7 0x00007f5510476d03 in rpc_clnt_notify (trans=, mydata=0x7f54fc4a1360, event=, data=0x7f549b65dc30) at rpc-clnt.c:971 #8 0x00007f5510472a73 in rpc_transport_notify (this=this at entry=0x7f54fc4a1500, event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f549b65dc30) at rpc-transport.c:538 #9 0x00007f5505067566 in socket_event_poll_in (this=this at entry=0x7f54fc4a1500, notify_handled=) at socket.c:2315 #10 0x00007f5505069b0c in socket_event_handler (fd=90, idx=99, gen=472, data=0x7f54fc4a1500, poll_in=1, poll_out=0, poll_err=0) at socket.c:2467 #11 0x00007f551070c7e4 in event_dispatch_epoll_handler (event=0x7f54f9079e80, event_pool=0x5625cf18aa30) at event-epoll.c:583 #12 event_dispatch_epoll_worker (data=0x7f54fc296580) at event-epoll.c:659 #13 0x00007f550f50ddd5 in start_thread (arg=0x7f54f907a700) at pthread_create.c:307 #14 0x00007f550edd5ead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111 We're crashing in this part of the code, specifically: (gdb) l 1470 1471 /* This shouldn't fail because we have the inode locked. 
*/ 1472 /* Inode size doesn't need to be updated under locks, because 1473 * conflicting operations won't be in-flight 1474 */ 1475 GF_ASSERT(ec_get_inode_size(fop, fop->locks[0].lock->loc.inode, 1476 &cbk->iatt[0].ia_size)); 1477 cbk->iatt[1].ia_size = fop->user_size; 1478 /* This shouldn't fail because we have the inode locked. */ 1479 GF_ASSERT(ec_set_inode_size(fop, fop->locks[0].lock->loc.inode, (gdb) p *cbk $7 = {list = {next = 0x7f53ff654950, prev = 0x7f53ff654950}, answer_list = {next = 0x7f53ff654960, prev = 0x7f53ff654960}, fop = 0x7f53ff654910, next = 0x0, idx = 3, op_ret = 0, op_errno = 0, count = 1, mask = 8, xdata = 0x0, dict = 0x0, int32 = 0, uintptr = {0, 0, 0}, size = 0, version = {0, 0}, inode = 0x0, fd = 0x0, statvfs = {f_bsize = 0, f_frsize = 0, f_blocks = 0, f_bfree = 0, f_bavail = 0, f_files = 0, f_ffree = 0, f_favail = 0, f_fsid = 0, f_flag = 0, f_namemax = 0, __f_spare = {0, 0, 0, 0, 0, 0}}, iatt = {{ia_ino = 12285952560967103824, ia_gfid = "\337\b\247-\b\344F?\200x?\276\265P", ia_dev = 2224, ia_type = IA_IFREG, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 1 '\001', write = 1 '\001', exec = 1 '\001'}, group = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}, other = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}}, ia_nlink = 1, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 491520, ia_blksize = 4096, ia_blocks = 3840, ia_atime = 1557032019, ia_atime_nsec = 590833985, ia_mtime = 1557032498, ia_mtime_nsec = 824769499, ia_ctime = 1557032498, ia_ctime_nsec = 824769499}, {ia_ino = 12285952560967103824, ia_gfid = "\337\b\247-\b\344F?\200x?\276\265P", ia_dev = 2224, ia_type = IA_IFREG, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 1 '\001', write = 1 '\001', exec = 1 '\001'}, group = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}, other = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}}, ia_nlink = 1, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 4096, ia_blocks = 0, ia_atime = 1557032019, ia_atime_nsec = 590833985, ia_mtime = 1557032498, ia_mtime_nsec = 824769499, ia_ctime = 1557032498, ia_ctime_nsec = 824769499}, {ia_ino = 0, ia_gfid = '\000' , ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}, {ia_ino = 0, ia_gfid = '\000' , ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}, {ia_ino = 0, ia_gfid = '\000' , ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, 
ia_rdev = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}}, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0, data = '\000' }}, vector = 0x0, buffers = 0x0, str = 0x0, entries = {{list = {next = 0x7f54429a3188, prev = 0x7f54429a3188}, {next = 0x7f54429a3188, prev = 0x7f54429a3188}}, d_ino = 0, d_off = 0, d_len = 0, d_type = 0, d_stat = {ia_ino = 0, ia_gfid = '\000' , ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}, dict = 0x0, inode = 0x0, d_name = 0x7f54429a3230 ""}, offset = 0, what = GF_SEEK_DATA} (gdb) p *cbk->fop $8 = {id = 24, refs = 3, state = 4, minimum = 1, expected = 1, winds = 0, jobs = 1, error = 0, parent = 0x7f532c197d80, xl = 0x7f54fc186380, req_frame = 0x7f532c048c60, frame = 0x7f54700662d0, cbk_list = { next = 0x7f54429a2a10, prev = 0x7f54429a2a10}, answer_list = {next = 0x7f54429a2a20, prev = 0x7f54429a2a20}, pending_list = {next = 0x7f533007acc0, prev = 0x7f5477976ac0}, answer = 0x7f54429a2a10, lock_count = 0, locked = 0, locks = {{lock = 0x0, fop = 0x0, owner_list = {next = 0x7f53ff6549a0, prev = 0x7f53ff6549a0}, wait_list = {next = 0x7f53ff6549b0, prev = 0x7f53ff6549b0}, update = {_gf_false, _gf_false}, dirty = {_gf_false, _gf_false}, optimistic_changelog = _gf_false, base = 0x0, size = 0, waiting_flags = 0, fl_start = 0, fl_end = 0}, {lock = 0x0, fop = 0x0, owner_list = {next = 0x7f53ff654a10, prev = 0x7f53ff654a10}, wait_list = { next = 0x7f53ff654a20, prev = 0x7f53ff654a20}, update = {_gf_false, _gf_false}, dirty = {_gf_false, _gf_false}, optimistic_changelog = _gf_false, base = 0x0, size = 0, waiting_flags = 0, fl_start = 0, fl_end = 0}}, first_lock = 0, lock = {spinlock = 0, mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' , __align = 0}}, flags = 0, first = 0, mask = 8, healing = 0, remaining = 0, received = 8, good = 8, uid = 0, gid = 0, wind = 0x7f5502710ae0 , handler = 0x7f5502715c50 , resume = 0x0, cbks = {access = 0x7f550270af50 , create = 0x7f550270af50 , discard = 0x7f550270af50 , entrylk = 0x7f550270af50 , fentrylk = 0x7f550270af50 , fallocate = 0x7f550270af50 , flush = 0x7f550270af50 , fsync = 0x7f550270af50 , fsyncdir = 0x7f550270af50 , getxattr = 0x7f550270af50 , fgetxattr = 0x7f550270af50 , heal = 0x7f550270af50 , fheal = 0x7f550270af50 , inodelk = 0x7f550270af50 , finodelk = 0x7f550270af50 , link = 0x7f550270af50 , lk = 0x7f550270af50 , lookup = 0x7f550270af50 , mkdir = 0x7f550270af50 , mknod = 0x7f550270af50 , open = 0x7f550270af50 , opendir = 0x7f550270af50 , readdir = 0x7f550270af50 , readdirp = 0x7f550270af50 , readlink = 0x7f550270af50 , readv = 0x7f550270af50 , removexattr = 0x7f550270af50 , fremovexattr = 0x7f550270af50 , rename = 0x7f550270af50 , rmdir = 0x7f550270af50 , setattr = 0x7f550270af50 , fsetattr = 0x7f550270af50 , setxattr = 0x7f550270af50 , fsetxattr = 0x7f550270af50 , stat = 0x7f550270af50 , fstat = 0x7f550270af50 , statfs = 0x7f550270af50 , symlink = 
0x7f550270af50 , truncate = 0x7f550270af50 , ftruncate = 0x7f550270af50 , unlink = 0x7f550270af50 , writev = 0x7f550270af50 , xattrop = 0x7f550270af50 , fxattrop = 0x7f550270af50 , zerofill = 0x7f550270af50 , seek = 0x7f550270af50 , ipc = 0x7f550270af50 }, data = 0x7f5477976a60, heal = 0x0, healer = {next = 0x7f53ff654b08, prev = 0x7f53ff654b08}, user_size = 0, head = 0, use_fd = 1, xdata = 0x0, dict = 0x0, int32 = 0, uint32 = 0, size = 0, offset = 0, mode = {0, 0}, entrylk_cmd = ENTRYLK_LOCK, entrylk_type = ENTRYLK_RDLCK, xattrop_flags = GF_XATTROP_ADD_ARRAY, dev = 0, inode = 0x0, fd = 0x7f54dfb900a0, iatt = {ia_ino = 0, ia_gfid = '\000' , ia_dev = 0, ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}, str = {0x0, 0x0}, loc = {{path = 0x0, name = 0x0, inode = 0x0, parent = 0x0, gfid = '\000' , pargfid = '\000' }, {path = 0x0, name = 0x0, inode = 0x0, parent = 0x0, gfid = '\000' , pargfid = '\000' }}, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0, data = '\000' }}, vector = 0x0, buffers = 0x0, seek = GF_SEEK_DATA, errstr = 0x0} Checking further the lock: (gdb) p fop->locks[0] $5 = {lock = 0x0, fop = 0x0, owner_list = {next = 0x7f53ff6549a0, prev = 0x7f53ff6549a0}, wait_list = {next = 0x7f53ff6549b0, prev = 0x7f53ff6549b0}, update = {_gf_false, _gf_false}, dirty = {_gf_false, _gf_false}, optimistic_changelog = _gf_false, base = 0x0, size = 0, waiting_flags = 0, fl_start = 0, fl_end = 0} (gdb) p fop->locks[0].lock $6 = (ec_lock_t *) 0x0 (gdb) p fop->locks[0].lock->loc.inode Cannot access memory at address 0x90 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Sun May 5 17:02:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 05 May 2019 17:02:56 +0000 Subject: [Bugs] [Bug 1706603] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706603 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Group|private | -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 00:01:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 00:01:56 +0000 Subject: [Bugs] [Bug 1706603] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706603 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22660 -- You are receiving this mail because: You are the assignee for the bug. 
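The last gdb print explains the fault address: fop->locks[0].lock is NULL, so following it to loc.inode adds a member offset to a zero base pointer. A generic C sketch of that effect follows; the struct layout is made up purely for illustration and is not the real ec_lock_t/loc_t definition.

#include <stddef.h>
#include <stdio.h>

/* Made-up layout that only mimics an inode pointer living 0x90 bytes
   into the structure on a 64-bit build. */
struct loc_like  { void *pad[18]; void *inode; };
struct lock_like { struct loc_like loc; };

int main(void)
{
    struct lock_like *lock = NULL;   /* corresponds to fop->locks[0].lock == 0x0 */

    /* The member offset is what gets added to the NULL base pointer,
       so the eventual fault lands at a small address such as 0x90. */
    printf("lock->loc.inode would be read from address %#zx\n",
           offsetof(struct lock_like, loc.inode));

    (void)lock;                      /* dereferencing lock->loc.inode here would segfault */
    return 0;
}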
From bugzilla at redhat.com Mon May 6 00:01:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 00:01:57 +0000 Subject: [Bugs] [Bug 1706603] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706603 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-06 00:01:57 --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22660 (cluster/ec: Reopen shouldn't happen with O_TRUNC) merged (#1) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 04:08:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 04:08:46 +0000 Subject: [Bugs] [Bug 1706683] New: Enable enable fips-mode-rchecksum for new volumes by default Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706683 Bug ID: 1706683 Summary: Enable enable fips-mode-rchecksum for new volumes by default Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: glusterd Keywords: Triaged Assignee: amukherj at redhat.com Reporter: ravishankar at redhat.com QA Contact: bmekala at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1702303 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1702303 +++ Description of problem: fips-mode-rchecksum option was provided in GD_OP_VERSION_4_0_0 to maintain backward compatibility with older AFR so that a cluster operating at an op version of less than GD_OP_VERSION_4_0_0 used MD5SUM instead of the SHA256 that would be used if this option was enabled. But in a freshly created setup with cluster op-version >=GD_OP_VERSION_4_0_0, we can directly go ahead and use SHA256 without asking the admin to explicitly set the volume option 'on'. In fact in downstream, this created quite a bit of confusion when QE would created a new glusterfs setup on a FIPS enabled machine and would try out self-heal test cases (without setting 'fips-mode-rchecksum' on), leading to crashes due to non-compliance. Ideally this fix should have been done as a part of the original commit: "6daa65356 - posix/afr: handle backward compatibility for rchecksum fop" but I guess it is better late than never. --- Additional comment from Worker Ant on 2019-04-26 08:23:27 UTC --- REVIEW: https://review.gluster.org/22609 (glusterd: enable fips-mode-rchecksum for new volumes) merged (#4) on master by Atin Mukherjee Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1702303 [Bug 1702303] Enable enable fips-mode-rchecksum for new volumes by default -- You are receiving this mail because: You are on the CC list for the bug. 
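For volumes created before this change, the SHA256 behaviour described above can still be enabled per volume from the CLI. A minimal sketch, assuming the storage.fips-mode-rchecksum volume option key and a placeholder volume name VOLNAME:

# Cluster op-version must already be at GD_OP_VERSION_4_0_0 (40000) or higher
gluster volume get all cluster.op-version

# Check the current value, then turn the option on explicitly
gluster volume get VOLNAME storage.fips-mode-rchecksum
gluster volume set VOLNAME storage.fips-mode-rchecksum on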
From bugzilla at redhat.com Mon May 6 04:08:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 04:08:46 +0000 Subject: [Bugs] [Bug 1702303] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702303 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1706683 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1706683 [Bug 1706683] Enable enable fips-mode-rchecksum for new volumes by default -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 04:08:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 04:08:50 +0000 Subject: [Bugs] [Bug 1706683] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706683 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 04:10:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 04:10:05 +0000 Subject: [Bugs] [Bug 1706683] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706683 --- Comment #2 from Ravishankar N --- Note: In upstream, the fix was tied to GD_OP_VERSION_7_0. We might need to use the right op-version in the downstream backport. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 04:10:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 04:10:38 +0000 Subject: [Bugs] [Bug 1706683] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706683 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|amukherj at redhat.com |ravishankar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 05:18:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 05:18:30 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22661 -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 6 05:18:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 05:18:30 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1618 from Worker Ant --- REVIEW: https://review.gluster.org/22661 (glusterd: coverity fix) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 07:00:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 07:00:44 +0000 Subject: [Bugs] [Bug 1214644] Upcall: Migrate state during rebalance/tiering In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1214644 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX Flags|needinfo?(skoduri at redhat.co | |m) | Last Closed| |2019-05-06 07:00:44 --- Comment #4 from Soumya Koduri --- This is a Day1 issue (i.e, rebalance and self-healing does not happen for most of the state maintained at the server-side) and there are no plans to address it in the near future. Hence closing the bug. -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 07:01:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 07:01:09 +0000 Subject: [Bugs] [Bug 1706716] New: glusterd generated core while running ./tests/bugs/cli/bug-1077682.t Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706716 Bug ID: 1706716 Summary: glusterd generated core while running ./tests/bugs/cli/bug-1077682.t Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: glusterd generated a core file while running ./tests/bugs/cli/bug-1077682.t in centos running. Core and logs can be found in https://build.gluster.org/job/centos7-regression/5857/consoleFull -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 07:01:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 07:01:16 +0000 Subject: [Bugs] [Bug 1214654] Self-heal: Migrate lease_locks as part of self-heal process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1214654 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX Flags|needinfo?(skoduri at redhat.co | |m) | Last Closed| |2019-05-06 07:01:16 --- Comment #2 from Soumya Koduri --- This is a Day1 issue (i.e, rebalance and self-healing does not happen for most of the state maintained at the server-side) and there are no plans to address it in the near future. Hence closing the bug. -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 6 07:02:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 07:02:54 +0000 Subject: [Bugs] [Bug 1706683] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706683 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #3 from Ravishankar N --- (In reply to Ravishankar N from comment #2) > Note: In upstream, the fix was tied to GD_OP_VERSION_7_0. We might need to > use the right op-version in the downstream backport. rhgs-3.5.0 also uses GD_OP_VERSION_7_0 for the maximum op-version, so the patch is a straight forward back-port: https://code.engineering.redhat.com/gerrit/#/c/169443/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 07:02:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 07:02:14 +0000 Subject: [Bugs] [Bug 1706716] glusterd generated core while running ./tests/bugs/cli/bug-1077682.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706716 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |srakonde at redhat.com --- Comment #1 from Sanju --- s/"centos running"/"centos regression" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 07:07:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 07:07:21 +0000 Subject: [Bugs] [Bug 1705351] glusterfsd crash after days of running In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705351 --- Comment #3 from Xavi Hernandez --- Thanks for the sharing the coredump. I'll take a look as soon as I can. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 09:41:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 09:41:15 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22664 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 09:41:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 09:41:16 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #35 from Worker Ant --- REVIEW: https://review.gluster.org/22664 (glusterd/tier: remove tier related code from glusterd) posted (#1) for review on master by hari gowtham -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 6 09:49:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 09:49:59 +0000 Subject: [Bugs] [Bug 1706716] glusterd generated core while running ./tests/bugs/cli/bug-1077682.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706716 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com --- Comment #2 from Atin Mukherjee --- (gdb) bt #0 0x00007fa56532b207 in raise () from ./lib64/libc.so.6 #1 0x00007fa56532c8f8 in abort () from ./lib64/libc.so.6 #2 0x00007fa56536dd27 in __libc_message () from ./lib64/libc.so.6 #3 0x00007fa56536de0e in __libc_fatal () from ./lib64/libc.so.6 #4 0x00007fa56536e183 in _IO_vtable_check () from ./lib64/libc.so.6 #5 0x00007fa565372c9b in _IO_cleanup () from ./lib64/libc.so.6 #6 0x00007fa56532eb1b in __run_exit_handlers () from ./lib64/libc.so.6 #7 0x00007fa56532ebb7 in exit () from ./lib64/libc.so.6 #8 0x0000000000409485 in cleanup_and_exit (signum=15) at /home/jenkins/root/workspace/centos7-regression/glusterfsd/src/glusterfsd.c:1659 #9 0x000000000040b093 in glusterfs_sigwaiter (arg=0x7fff6cf56250) at /home/jenkins/root/workspace/centos7-regression/glusterfsd/src/glusterfsd.c:2421 #10 0x00007fa565b2bdd5 in start_thread () from ./lib64/libpthread.so.0 #11 0x00007fa5653f2ead in clone () from ./lib64/libc.so.6 (gdb) t a a bt Thread 9 (LWP 15288): #0 0x00007fa565b32e3d in nanosleep () from ./lib64/libpthread.so.0 #1 0x00007fa566d11c77 in gf_timer_proc (data=0xe569d0) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/timer.c:194 #2 0x00007fa565b2bdd5 in start_thread () from ./lib64/libpthread.so.0 #3 0x00007fa5653f2ead in clone () from ./lib64/libc.so.6 Thread 8 (LWP 15287): #0 0x00007fa565b2cf47 in pthread_join () from ./lib64/libpthread.so.0 #1 0x00007fa566d7be2b in event_dispatch_epoll (event_pool=0xe4ee40) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/event-epoll.c:846 #2 0x00007fa566d38405 in event_dispatch (event_pool=0xe4ee40) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/event.c:116 #3 0x000000000040c019 in main (argc=1, argv=0x7fff6cf574a8) at /home/jenkins/root/workspace/centos7-regression/glusterfsd/src/glusterfsd.c:2917 Thread 7 (LWP 15321): #0 0x00007fa565b2f965 in pthread_cond_wait@@GLIBC_2.3.2 () from ./lib64/libpthread.so.0 #1 0x00007fa55aeb1b50 in hooks_worker (args=0xe61d70) at /home/jenkins/root/workspace/centos7-regression/xlators/mgmt/glusterd/src/glusterd-hooks.c:527 #2 0x00007fa565b2bdd5 in start_thread () from ./lib64/libpthread.so.0 #3 0x00007fa5653f2ead in clone () from ./lib64/libc.so.6 Thread 6 (LWP 15293): #0 0x00007fa5653e9f73 in select () from ./lib64/libc.so.6 #1 0x00007fa566d9a526 in runner (arg=0xe5b3c0) at /home/jenkins/root/workspace/centos7-regression/contrib/timer-wheel/timer-wheel.c:186 #2 0x00007fa565b2bdd5 in start_thread () from ./lib64/libpthread.so.0 #3 0x00007fa5653f2ead in clone () from ./lib64/libc.so.6 Thread 5 (LWP 15290): #0 0x00007fa5653b9e2d in nanosleep () from ./lib64/libc.so.6 #1 0x00007fa5653b9cc4 in sleep () from ./lib64/libc.so.6 #2 0x00007fa566d399f8 in pool_sweeper (arg=0x0) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/mem-pool.c:446 #3 0x00007fa565b2bdd5 in start_thread () from ./lib64/libpthread.so.0 #4 0x00007fa5653f2ead in clone () from ./lib64/libc.so.6 Thread 4 (LWP 15322): #0 0x00007fa565b324ed in __lll_lock_wait () from 
./lib64/libpthread.so.0 #1 0x00007fa565b2ddcb in _L_lock_883 () from ./lib64/libpthread.so.0 #2 0x00007fa565b2dc98 in pthread_mutex_lock () from ./lib64/libpthread.so.0 #3 0x00007fa55aee982e in gd_peerinfo_find_from_hostname (hoststr=0x7fa548007520 "builder204.int.aws.gluster.org") at /home/jenkins/root/workspace/centos7-regression/xlators/mgmt/glusterd/src/glusterd-peer-utils.c:650 #4 0x00007fa55aee7935 in glusterd_peerinfo_find_by_hostname (hoststr=0x7fa548007520 "builder204.int.aws.gluster.org") at /home/jenkins/root/workspace/centos7-regression/xlators/mgmt/glusterd/src/glusterd-peer-utils.c:112 #5 0x00007fa55aee7b77 in glusterd_hostname_to_uuid (hostname=0x7fa548007520 "builder204.int.aws.gluster.org", uuid=0x7fa557cd2ab0 "") at /home/jenkins/root/workspace/centos7-regression/xlators/mgmt/glusterd/src/glusterd-peer-utils.c:154 #6 0x00007fa55ae024b7 in glusterd_volume_brickinfo_get (uuid=0x0, hostname=0x7fa548007520 "builder204.int.aws.gluster.org", path=0x7fa54800761f "/d/backends/patchy4", volinfo=0xed3760, brickinfo=0x7fa557cd5c10) at /home/jenkins/root/workspace/centos7-regression/xlators/mgmt/glusterd/src/glusterd-utils.c:1611 #7 0x00007fa55ae0273e in glusterd_volume_brickinfo_get_by_brick (brick=0x7fa548001bd5 "builder204.int.aws.gluster.org:/d/backends/patchy4", volinfo=0xed3760, brickinfo=0x7fa557cd5c10, construct_real_path=false) at /home/jenkins/root/workspace/centos7-regression/xlators/mgmt/glusterd/src/glusterd-utils.c:1655 #8 0x00007fa55addfb0f in get_brickinfo_from_brickid ( brickid=0x7fa550004640 "a2393496-9716-4c17-a016-8512a0b911a7:builder204.int.aws.gluster.org:/d/backends/patchy4", brickinfo=0x7fa557cd5c10) at /home/jenkins/root/workspace/centos7-regression/xlators/mgmt/glusterd/src/glusterd-handler.c:6010 --Type for more, q to quit, c to continue without paging-- #9 0x00007fa55addfbf6 in __glusterd_brick_rpc_notify (rpc=0x7fa550004700, mydata=0x7fa550004640, event=RPC_CLNT_DISCONNECT, data=0x0) at /home/jenkins/root/workspace/centos7-regression/xlators/mgmt/glusterd/src/glusterd-handler.c:6044 #10 0x00007fa55adccdb1 in glusterd_big_locked_notify (rpc=0x7fa550004700, mydata=0x7fa550004640, event=RPC_CLNT_DISCONNECT, data=0x0, notify_fn=0x7fa55addfb32 <__glusterd_brick_rpc_notify>) at /home/jenkins/root/workspace/centos7-regression/xlators/mgmt/glusterd/src/glusterd-handler.c:66 #11 0x00007fa55ade0581 in glusterd_brick_rpc_notify (rpc=0x7fa550004700, mydata=0x7fa550004640, event=RPC_CLNT_DISCONNECT, data=0x0) at /home/jenkins/root/workspace/centos7-regression/xlators/mgmt/glusterd/src/glusterd-handler.c:6199 #12 0x00007fa566a9f668 in rpc_clnt_handle_disconnect (clnt=0x7fa550004700, conn=0x7fa550004730) at /home/jenkins/root/workspace/centos7-regression/rpc/rpc-lib/src/rpc-clnt.c:826 #13 0x00007fa566a9f927 in rpc_clnt_notify (trans=0x7fa550004a80, mydata=0x7fa550004730, event=RPC_TRANSPORT_DISCONNECT, data=0x7fa550004a80) at /home/jenkins/root/workspace/centos7-regression/rpc/rpc-lib/src/rpc-clnt.c:887 #14 0x00007fa566a9ba5b in rpc_transport_notify (this=0x7fa550004a80, event=RPC_TRANSPORT_DISCONNECT, data=0x7fa550004a80) at /home/jenkins/root/workspace/centos7-regression/rpc/rpc-lib/src/rpc-transport.c:549 #15 0x00007fa559fedd8e in socket_event_poll_err (this=0x7fa550004a80, gen=1, idx=3) at /home/jenkins/root/workspace/centos7-regression/rpc/rpc-transport/socket/src/socket.c:1385 #16 0x00007fa559ff3f17 in socket_event_handler (fd=7, idx=3, gen=1, data=0x7fa550004a80, poll_in=1, poll_out=4, poll_err=16, event_thread_died=0 '\000') at 
/home/jenkins/root/workspace/centos7-regression/rpc/rpc-transport/socket/src/socket.c:3025 #17 0x00007fa566d7b680 in event_dispatch_epoll_handler (event_pool=0xe4ee40, event=0x7fa557cd6140) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/event-epoll.c:648 #18 0x00007fa566d7bb99 in event_dispatch_epoll_worker (data=0xeed3e0) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/event-epoll.c:761 #19 0x00007fa565b2bdd5 in start_thread () from ./lib64/libpthread.so.0 #20 0x00007fa5653f2ead in clone () from ./lib64/libc.so.6 Thread 3 (LWP 15292): #0 0x00007fa565b2fd12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from ./lib64/libpthread.so.0 #1 0x00007fa566d51f02 in syncenv_task (proc=0xe57600) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/syncop.c:517 #2 0x00007fa566d520f7 in syncenv_processor (thdata=0xe57600) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/syncop.c:584 #3 0x00007fa565b2bdd5 in start_thread () from ./lib64/libpthread.so.0 #4 0x00007fa5653f2ead in clone () from ./lib64/libc.so.6 Thread 2 (LWP 15291): #0 0x00007fa565b2fd12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from ./lib64/libpthread.so.0 #1 0x00007fa566d51f02 in syncenv_task (proc=0xe57240) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/syncop.c:517 #2 0x00007fa566d520f7 in syncenv_processor (thdata=0xe57240) at /home/jenkins/root/workspace/centos7-regression/libglusterfs/src/syncop.c:584 #3 0x00007fa565b2bdd5 in start_thread () from ./lib64/libpthread.so.0 #4 0x00007fa5653f2ead in clone () from ./lib64/libc.so.6 Thread 1 (LWP 15289): #0 0x00007fa56532b207 in raise () from ./lib64/libc.so.6 #1 0x00007fa56532c8f8 in abort () from ./lib64/libc.so.6 #2 0x00007fa56536dd27 in __libc_message () from ./lib64/libc.so.6 #3 0x00007fa56536de0e in __libc_fatal () from ./lib64/libc.so.6 #4 0x00007fa56536e183 in _IO_vtable_check () from ./lib64/libc.so.6 #5 0x00007fa565372c9b in _IO_cleanup () from ./lib64/libc.so.6 #6 0x00007fa56532eb1b in __run_exit_handlers () from ./lib64/libc.so.6 #7 0x00007fa56532ebb7 in exit () from ./lib64/libc.so.6 #8 0x0000000000409485 in cleanup_and_exit (signum=15) at /home/jenkins/root/workspace/centos7-regression/glusterfsd/src/glusterfsd.c:1659 #9 0x000000000040b093 in glusterfs_sigwaiter (arg=0x7fff6cf56250) at /home/jenkins/root/workspace/centos7-regression/glusterfsd/src/glusterfsd.c:2421 #10 0x00007fa565b2bdd5 in start_thread () from ./lib64/libpthread.so.0 #11 0x00007fa5653f2ead in clone () from ./lib64/libc.so.6 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 05:18:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 05:18:31 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1619 from Worker Ant --- REVIEW: https://review.gluster.org/22656 (glusterd: prevent use-after-free in glusterd_op_ac_send_brick_op()) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 6 10:33:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 10:33:23 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22665 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 10:33:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 10:33:24 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #647 from Worker Ant --- REVIEW: https://review.gluster.org/22665 (libglusterfs: Fix compilation when --disable-mempool is used) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 10:49:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 10:49:43 +0000 Subject: [Bugs] [Bug 1705884] Image size as reported from the fuse mount is incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705884 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22655 (features/shard: Fix integer overflow in block count accounting) merged (#2) on master by Xavi Hernandez -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 10:56:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 10:56:47 +0000 Subject: [Bugs] [Bug 1703020] The cluster.heal-timeout option is unavailable for ec volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703020 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-06 10:56:47 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22622 (cluster/ec: fix shd healer wait timeout) merged (#2) on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 11:39:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 11:39:59 +0000 Subject: [Bugs] [Bug 1706842] New: Hard Failover with Samba and Glusterfs fails Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706842 Bug ID: 1706842 Summary: Hard Failover with Samba and Glusterfs fails Product: GlusterFS Version: 5 Status: NEW Component: gluster-smb Assignee: bugs at gluster.org Reporter: david.spisla at iternity.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Created attachment 1564378 --> https://bugzilla.redhat.com/attachment.cgi?id=1564378&action=edit Backtrace of the SMBD and GLUSTER communication Description of problem: I have this setup: 4-Node Glusterfs v5.5 Cluster, using SAMBA/CTDB v4.8 to access the volumes via vfs-glusterfs-plugin (each node has a VIP) I was testing this failover scenario: 1. 
Start writing 940 GB of small files (64K-100K) from a Win10 client to node1. 2. During the write process I did a hard shutdown of node1 (where the client is connected via the VIP) by turning off the power. My expectation was that the write process stops and, after a while, the Win10 client offers a Retry so I can continue the write on a different node (which now holds the VIP of node1). I had observed exactly that in the past (with Gluster v3.12), but now the system shows strange behaviour: the Win10 client does nothing and the Explorer freezes, and in the backend CTDB cannot perform the failover and throws errors. The glusterd on node2 and node3 logs these messages:
[2019-04-16 14:47:31.828323] W [glusterd-locks.c:795:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0x24349) [0x7f1a62fcb349] -->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0x2d950) [0x7f1a62fd4950] -->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe0359) [0x7f1a63087359] ) 0-management: Lock for vol archive1 not held
[2019-04-16 14:47:31.828350] W [MSGID: 106117] [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not released for archive1
[2019-04-16 14:47:31.828369] W [glusterd-locks.c:795:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0x24349) [0x7f1a62fcb349] -->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0x2d950) [0x7f1a62fd4950] -->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe0359) [0x7f1a63087359] ) 0-management: Lock for vol archive2 not held
[2019-04-16 14:47:31.828376] W [MSGID: 106117] [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not released for archive2
[2019-04-16 14:47:31.828412] W [glusterd-locks.c:795:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0x24349) [0x7f1a62fcb349] -->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0x2d950) [0x7f1a62fd4950] -->/usr/lib64/glusterfs/5.5/xlator/mgmt/glusterd.so(+0xe0359) [0x7f1a63087359] ) 0-management: Lock for vol gluster_shared_storage not held
[2019-04-16 14:47:31.828423] W [MSGID: 106117] [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not released for gluster_shared_storage
In my opinion Samba/CTDB cannot perform the failover correctly and continue the write process because glusterfs did not release the lock, but that is not entirely clear to me. Additional info: I made a network trace on the Windows machine. It shows that the client retries a TreeConnect several times. This Tree Connect is the connection to a share; Samba answers each attempt with NT_STATUS_UNSUCCESSFUL, which is unfortunately not a very meaningful message. Similarly, I "caught" the smbd in the debugger and was able to pull a backtrace while it hangs in the futex call we found in /proc//stack. The backtrace smbd-gluster-bt.txt (attached) shows that the smbd hangs in the gluster module. You can see in Frame 9 that Samba is hanging in the TCON (smbd_smb2_tree_connect). In frame 2 the function glfs_init() appears; its call can be found in source3/modules/vfs_glusterfs.c, line 342 (in samba master). Then comes another frame in the gluster library and then immediately the pthread_cond_wait call, which ends up in the kernel in a futex call (see /proc//stack). Quintessence: Samba waits for gluster for roughly 3 seconds, gluster then returns an error, and the client retries. And it obviously keeps doing that for 8 minutes. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
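To make the code path named in that backtrace easier to follow: vfs_glusterfs talks to the volume through libgfapi, and the call the smbd is parked in is glfs_init(). Below is a minimal, self-contained sketch of that connection sequence; the volume and host names are placeholders taken from this report, error handling is trimmed, and this is only an illustration, not the Samba module itself.

#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void) {
    /* placeholders: volume "archive1", any reachable node of the cluster */
    glfs_t *fs = glfs_new("archive1");
    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", "fs-sernet-c2-n2", 24007);
    glfs_set_logging(fs, "/tmp/gfapi-tcon.log", 7);

    /* This is the blocking step seen in frame 2 of the backtrace: it
     * fetches the volfile, builds the client-side graph and waits on a
     * condition variable (hence the futex in the kernel stack) until
     * the graph comes up or the connection attempt fails. */
    int ret = glfs_init(fs);
    printf("glfs_init returned %d\n", ret);

    glfs_fini(fs);
    return 0;
}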
From bugzilla at redhat.com Mon May 6 11:43:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 11:43:21 +0000 Subject: [Bugs] [Bug 1706842] Hard Failover with Samba and Glusterfs fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706842 --- Comment #1 from david.spisla at iternity.com --- Here is the Volume configuration: Volume Name: archive1 Type: Replicate Volume ID: 0ed37705-e817-49c6-95c8-32f4931b597a Status: Started Snapshot Count: 0 Number of Bricks: 1 x 4 = 4 Transport-type: tcp Bricks: Brick1: fs-sernet-c2-n1:/gluster/brick1/glusterbrick Brick2: fs-sernet-c2-n2:/gluster/brick1/glusterbrick Brick3: fs-sernet-c2-n3:/gluster/brick1/glusterbrick Brick4: fs-sernet-c2-n4:/gluster/brick1/glusterbrick Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet user.smb: disable features.read-only: off features.worm: off features.worm-file-level: on features.retention-mode: enterprise features.default-retention-period: 120 network.ping-timeout: 10 features.cache-invalidation: on features.cache-invalidation-timeout: 600 performance.nl-cache: on performance.nl-cache-timeout: 600 client.event-threads: 32 server.event-threads: 32 cluster.lookup-optimize: on performance.stat-prefetch: on performance.cache-invalidation: on performance.md-cache-timeout: 600 performance.cache-samba-metadata: on performance.cache-ima-xattrs: on performance.io-thread-count: 64 cluster.use-compound-fops: on performance.cache-size: 512MB performance.cache-refresh-timeout: 10 performance.read-ahead: off performance.write-behind-window-size: 4MB performance.write-behind: on storage.build-pgfid: on features.utime: on storage.ctime: on cluster.quorum-type: fixed cluster.quorum-count: 2 features.bitrot: on features.scrub: Active features.scrub-freq: daily cluster.enable-shared-storage: enable -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 13:02:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 13:02:09 +0000 Subject: [Bugs] [Bug 1706893] New: Volume stop when quorum not met is successful Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 Bug ID: 1706893 Summary: Volume stop when quorum not met is successful Product: Red Hat Gluster Storage Version: rhgs-3.5 Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Keywords: Triaged Severity: medium Assignee: amukherj at redhat.com Reporter: kiyer at redhat.com QA Contact: bmekala at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, rhs-bugs at redhat.com, risjain at redhat.com, sankarshan at redhat.com, srakonde at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1690753 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1690753 +++ Description of problem: On a 2 node cluster(N1 &N2), create one volume of type distributed. Now set cluster.server-quorum-ratio to 90% and set cluster.server-quorum-type to server. Start the volume and stop glusterd on one of the node. Now if you try to stop the volume the volumes stops successfully but ideally it shouldn't stop. How reproducible: 5/5 Steps to Reproduce: 1. Create a cluster with 2 nodes. 2. Create a volume of type distributed. 3. Set cluster.server-quorum-ratio to 90. 4. Set server-quorum-type to server. 5. Start the volume. 6. Stop glusterd on one node. 7. 
Stop the volume.(Should fail!) Actual results: volume stop: testvol_distributed: success Expected results: volume stop: testvol_distributed: failed: Quorum not met. Volume operation not allowed. Additional info: --- Additional comment from Atin Mukherjee on 2019-04-01 14:33:42 UTC --- This looks like a bug and should be an easy fix. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1690753 [Bug 1690753] Volume stop when quorum not met is successful -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 13:02:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 13:02:09 +0000 Subject: [Bugs] [Bug 1690753] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690753 Kshithij Iyer changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1706893 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 [Bug 1706893] Volume stop when quorum not met is successful -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 13:02:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 13:02:12 +0000 Subject: [Bugs] [Bug 1706893] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 13:09:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 13:09:41 +0000 Subject: [Bugs] [Bug 1706893] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 Kshithij Iyer changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Regression -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 13:09:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 13:09:43 +0000 Subject: [Bugs] [Bug 1706893] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: Block proposed | |regressions at RHGS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 13:58:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 13:58:26 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1620 from Worker Ant --- REVIEW: https://review.gluster.org/22661 (glusterd: coverity fix) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 6 13:58:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 13:58:50 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #36 from Worker Ant --- REVIEW: https://review.gluster.org/22550 (tests: validate volfile grammar - strings in volfile) merged (#9) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 13:59:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 13:59:13 +0000 Subject: [Bugs] [Bug 1704888] delete the snapshots and volume at the end of uss.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704888 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-06 13:59:13 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22649 (tests: delete the snapshots and the volume after the tests) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 14:00:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 14:00:23 +0000 Subject: [Bugs] [Bug 1704888] delete the snapshots and volume at the end of uss.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1704888 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST CC| |atumball at redhat.com Resolution|NEXTRELEASE |--- Keywords| |Reopened -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 10:33:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 10:33:24 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #648 from Worker Ant --- REVIEW: https://review.gluster.org/22274 (mem-pool.{c|h}: minor changes) merged (#18) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 14:11:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 14:11:08 +0000 Subject: [Bugs] [Bug 1706893] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |vpandey at redhat.com Flags| |needinfo?(vpandey at redhat.co | |m) -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon May 6 15:14:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 15:14:47 +0000 Subject: [Bugs] [Bug 1706683] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706683 Rahul Hinduja changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |asriram at redhat.com, | |rhinduja at redhat.com Blocks| |1696803 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 15:14:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 15:14:50 +0000 Subject: [Bugs] [Bug 1706683] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706683 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: Auto pm_ack+ for | |devel & qe approved BZs at | |RHGS 3.5.0 Rule Engine Rule| |665 Target Release|--- |RHGS 3.5.0 Rule Engine Rule| |666 Rule Engine Rule| |327 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 17:09:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 17:09:44 +0000 Subject: [Bugs] [Bug 1706893] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 --- Comment #5 from Vishal Pandey --- Will start working on it ASAP. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 17:11:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 17:11:08 +0000 Subject: [Bugs] [Bug 1706893] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 Vishal Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|amukherj at redhat.com |vpandey at redhat.com Flags|needinfo?(vpandey at redhat.co | |m) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 6 18:07:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 18:07:50 +0000 Subject: [Bugs] [Bug 1706842] Hard Failover with Samba and Glusterfs fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706842 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 6 18:25:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 18:25:27 +0000 Subject: [Bugs] [Bug 1707081] New: Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707081 Bug ID: 1707081 Summary: Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup Product: GlusterFS Version: mainline Status: NEW Component: glusterd Keywords: AutomationBlocker, Regression, TestBlocker Severity: high Assignee: bugs at gluster.org Reporter: rkavunga at redhat.com CC: amukherj at redhat.com, bmekala at redhat.com, bugs at gluster.org, rhinduja at redhat.com, rhs-bugs at redhat.com, rkavunga at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, ubansal at redhat.com, vbellur at redhat.com Depends On: 1704851 Blocks: 1696807 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1704851 [Bug 1704851] Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 18:26:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 18:26:02 +0000 Subject: [Bugs] [Bug 1707081] Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707081 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Comment #0 is|1 |0 private| | Status|NEW |ASSIGNED QA Contact| |rkavunga at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 18:29:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 18:29:02 +0000 Subject: [Bugs] [Bug 1707081] Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707081 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22667 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 6 18:29:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 06 May 2019 18:29:03 +0000 Subject: [Bugs] [Bug 1707081] Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707081 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22667 (shd/glusterd: Serialize shd manager to prevent race condition) posted (#2) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue May 7 02:46:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 02:46:05 +0000 Subject: [Bugs] [Bug 1707195] New: VM stuck in a shutdown because of a pending fuse request Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707195 Bug ID: 1707195 Summary: VM stuck in a shutdown because of a pending fuse request Product: GlusterFS Version: 6 OS: Linux Status: NEW Component: write-behind Severity: medium Priority: medium Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: bugs at gluster.org, nravinas at redhat.com, rgowdapp at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com Depends On: 1702686, 1705865 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1705865 [Bug 1705865] VM stuck in a shutdown because of a pending fuse request -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 02:48:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 02:48:00 +0000 Subject: [Bugs] [Bug 1707195] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707195 --- Comment #1 from Raghavendra G --- VM fails to shutdown, getting stuck in 'Powering down' status. This is because its 'qemu-kvm' process gets in a zombie/defunct state: more ps-Ll.txt F S UID PID PPID LWP C PRI NI ADDR SZ WCHAN TTY TIME CMD 6 Z 107 20631 1 20631 0 80 0 - 0 do_exi ? 8:45 [qemu-kvm] 3 D 107 20631 1 20635 0 80 0 - 2386845 fuse_r ? 1:12 [qemu-kvm] The customer has collected a crash dump of the affected VM and also statedumps from all the glusterfs process running in this machine when this problem is present. 
Thread ID 20635 is the one of interest: crash> bt 20635 PID: 20635 TASK: ffff9ed3926eb0c0 CPU: 7 COMMAND: "IO iothread1" #0 [ffff9ec8e351fa28] __schedule at ffffffff91967747 #1 [ffff9ec8e351fab0] schedule at ffffffff91967c49 #2 [ffff9ec8e351fac0] __fuse_request_send at ffffffffc09d24e5 [fuse] #3 [ffff9ec8e351fb30] fuse_request_send at ffffffffc09d26e2 [fuse] #4 [ffff9ec8e351fb40] fuse_send_write at ffffffffc09dbc76 [fuse] #5 [ffff9ec8e351fb70] fuse_direct_io at ffffffffc09dc0d6 [fuse] #6 [ffff9ec8e351fc58] __fuse_direct_write at ffffffffc09dc562 [fuse] #7 [ffff9ec8e351fca8] fuse_direct_IO at ffffffffc09dd3ca [fuse] #8 [ffff9ec8e351fd70] generic_file_direct_write at ffffffff913b8663 #9 [ffff9ec8e351fdc8] fuse_file_aio_write at ffffffffc09ddbd5 [fuse] #10 [ffff9ec8e351fe60] do_io_submit at ffffffff91497a73 #11 [ffff9ec8e351ff40] sys_io_submit at ffffffff91497f40 #12 [ffff9ec8e351ff50] tracesys at ffffffff9197505b (via system_call) RIP: 00007f9ff0758697 RSP: 00007f9db86814b8 RFLAGS: 00000246 RAX: ffffffffffffffda RBX: 0000000000000001 RCX: ffffffffffffffff RDX: 00007f9db86814d0 RSI: 0000000000000001 RDI: 00007f9ff268e000 RBP: 0000000000000080 R8: 0000000000000080 R9: 000000000000006a R10: 0000000000000078 R11: 0000000000000246 R12: 00007f9db86814c0 R13: 0000560264b9b518 R14: 0000560264b9b4f0 R15: 00007f9db8681bb0 ORIG_RAX: 00000000000000d1 CS: 0033 SS: 002b >From the core, this is the file the above process is writing to: crash> files -d 0xffff9ec8e8f9f740 DENTRY INODE SUPERBLK TYPE PATH ffff9ec8e8f9f740 ffff9ed39e705700 ffff9ee009adc000 REG /rhev/data-center/mnt/glusterSD/172.16.20.21:_vmstore2/e5dd645f-88bb-491c-9145-38fa229cbc4d/images/8e84c1ed-48ba-4b82-9882-c96e6f260bab/29bba0a1-6c7b-4358-9ef2-f8080405778d So in this case we're accessing the vmstore2 volume. This is the glusterfs process: root 4863 0.0 0.0 1909580 49316 ? S bt 4863 PID: 4863 TASK: ffff9edfa9ff9040 CPU: 11 COMMAND: "glusterfs" #0 [ffff9ed3a332fc28] __schedule at ffffffff91967747 #1 [ffff9ed3a332fcb0] schedule at ffffffff91967c49 #2 [ffff9ed3a332fcc0] futex_wait_queue_me at ffffffff9130cf76 #3 [ffff9ed3a332fd00] futex_wait at ffffffff9130dc5b #4 [ffff9ed3a332fe48] do_futex at ffffffff9130f9a6 #5 [ffff9ed3a332fed8] sys_futex at ffffffff9130fec0 #6 [ffff9ed3a332ff50] system_call_fastpath at ffffffff91974ddb RIP: 00007f6e5eeccf47 RSP: 00007ffdd311c7d0 RFLAGS: 00000246 RAX: 00000000000000ca RBX: 00007f6e59496700 RCX: ffffffffffffffff RDX: 0000000000001308 RSI: 0000000000000000 RDI: 00007f6e594969d0 RBP: 00007f6e60552780 R8: 0000000000000000 R9: 00007f6e5e6e314d R10: 0000000000000000 R11: 0000000000000246 R12: 00007f6e59496d28 R13: 0000000000000000 R14: 0000000000000006 R15: 00007ffdd311c920 ORIG_RAX: 00000000000000ca CS: 0033 SS: 002b We have a few pending frames in this process. 
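For reference, statedumps like the one reviewed next are typically produced in one of two ways; the volume name and PID below are taken from this report, and the default dump directory may differ on tuned setups.

# server side: ask glusterd to dump state for all bricks of the volume
gluster volume statedump vmstore2

# client side (the fuse mount process analysed here): SIGUSR1 makes the
# process write a dump, by default under /var/run/gluster
kill -USR1 4863
ls /var/run/gluster/glusterdump.4863.dump.*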
Reviewing the corresponding statedump: grep complete=0 glusterdump.4863.dump.1556091368 -c 7 Looking for these pending frames in the statedump: ~~~ [global.callpool.stack.1] stack=0x7f6e4007c828 uid=107 gid=107 pid=20635 unique=5518502 lk-owner=bd2351a6cc7fcb8b op=WRITE type=1 cnt=6 [global.callpool.stack.1.frame.1] frame=0x7f6dec04de38 ref_count=0 translator=vmstore2-write-behind complete=0 parent=vmstore2-open-behind wind_from=default_writev_resume wind_to=(this->children->xlator)->fops->writev unwind_to=default_writev_cbk [global.callpool.stack.1.frame.2] frame=0x7f6dec0326f8 ref_count=1 translator=vmstore2-open-behind complete=0 parent=vmstore2-md-cache wind_from=mdc_writev wind_to=(this->children->xlator)->fops->writev unwind_to=mdc_writev_cbk [global.callpool.stack.1.frame.3] frame=0x7f6dec005bf8 ref_count=1 translator=vmstore2-md-cache complete=0 parent=vmstore2-io-threads wind_from=default_writev_resume wind_to=(this->children->xlator)->fops->writev unwind_to=default_writev_cbk [global.callpool.stack.1.frame.4] frame=0x7f6e400ab0f8 ref_count=1 translator=vmstore2-io-threads complete=0 parent=vmstore2 wind_from=io_stats_writev wind_to=(this->children->xlator)->fops->writev unwind_to=io_stats_writev_cbk [global.callpool.stack.1.frame.5] frame=0x7f6e4007c6c8 ref_count=1 translator=vmstore2 complete=0 parent=fuse wind_from=fuse_write_resume wind_to=FIRST_CHILD(this)->fops->writev unwind_to=fuse_writev_cbk [global.callpool.stack.1.frame.6] frame=0x7f6e4002cb98 ref_count=1 translator=fuse complete=0 ~~~ So I believe we're pending in the 'write-behind' translator. Please, I'd need some help to figure out the cause of the hang. Thank you, Natalia -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 02:48:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 02:48:45 +0000 Subject: [Bugs] [Bug 1707195] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707195 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED --- Comment #2 from Raghavendra G --- I do see a write request hung in write-behind. Details of write-request from state-dump: [xlator.performance.write-behind.wb_inode] path=/e5dd645f-88bb-491c-9145-38fa229cbc4d/images/8e84c1ed-48ba-4b82-9882-c96e6f260bab/29bba0a1-6c7b-4358-9ef2-f8080405778d inode=0x7f6e40060888 gfid=6348d15d-7b17-4993-9da9-3f588c2ad5a8 window_conf=1048576 window_current=0 transit-size=0 dontsync=0 [.WRITE] unique=5518502 refcount=1 wound=no generation-number=0 req->op_ret=131072 req->op_errno=0 sync-attempts=0 sync-in-progress=no size=131072 offset=4184756224 lied=0 append=0 fulfilled=0 go=0 I'll go through this and will try to come up with an RCA. --- Additional comment from Raghavendra G on 2019-04-29 07:21:50 UTC --- There is a race in the way O_DIRECT writes are handled. Assume two overlapping write requests w1 and w2. * w1 is issued and is in wb_inode->wip queue as the response is still pending from bricks. Also wb_request_unref in wb_do_winds is not yet invoked. list_for_each_entry_safe (req, tmp, tasks, winds) { list_del_init (&req->winds); if (req->op_ret == -1) { call_unwind_error_keep_stub (req->stub, req->op_ret, req->op_errno); } else { call_resume_keep_stub (req->stub); } wb_request_unref (req); } * w2 is issued and wb_process_queue is invoked. 
w2 is not picked up for winding as w1 is still in wb_inode->wip. w2 is added to the todo list and wb_writev for w2 returns. * response to w1 is received and invokes wb_request_unref. Assume wb_request_unref in wb_do_winds (see point 1) is not invoked yet. Since there is one more refcount, wb_request_unref in wb_writev_cbk of w1 doesn't remove w1 from wip. * wb_process_queue is invoked as part of wb_writev_cbk of w1. But, it fails to wind w2 as w1 is still in wip. * wb_request_unref is invoked on w1 as part of wb_do_winds. w1 is removed from all queues, including wip. * After this point there is no invocation of wb_process_queue unless a new request is issued from the application, causing w2 to hang till the next request. This bug is similar to bz 1626780 and bz 1379655. Though the issue is similar, fixes to these two bzs won't fix the current bug and hence this bug is not a duplicate. This bug will require a new fix and I'll post a patch to gerrit shortly. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 02:50:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 02:50:49 +0000 Subject: [Bugs] [Bug 1707195] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707195 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22668 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 02:50:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 02:50:50 +0000 Subject: [Bugs] [Bug 1707195] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707195 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22668 (performance/write-behind: remove request from wip list in wb_writev_cbk) posted (#1) for review on release-6 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
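To make the ordering problem described in that RCA concrete, here is a small self-contained toy model of the race and of the fix suggested by the patch title ("remove request from wip list in wb_writev_cbk"). The names and structures are simplified stand-ins, not the real write-behind code.

#include <stdbool.h>
#include <stdio.h>

/* Toy request: "on_wip" models membership in wb_inode->wip, "refs"
 * models the refcount held by the callback and by wb_do_winds(). */
struct req { bool on_wip; int refs; };

/* wb_process_queue() stand-in: an overlapping request held in todo is
 * wound only when nothing overlapping is still in flight. */
static void process_queue(const struct req *inflight, const char *tag) {
    printf("%s: w2 %s\n", tag,
           inflight->on_wip ? "stays queued" : "gets wound");
}

/* Buggy ordering: the callback only drops a ref; w1 leaves wip in the
 * last unref, done later by wb_do_winds(), after process_queue() has
 * already run and seen wip busy. Nothing retriggers processing, so w2
 * hangs until the application issues another request. */
static void writev_cbk_buggy(struct req *w1) {
    w1->refs--;                      /* wb_do_winds() still holds a ref */
    process_queue(w1, "buggy");
    w1->on_wip = false;              /* final unref happens too late */
    w1->refs--;
}

/* Fixed ordering: take w1 off wip in the callback itself, then process
 * the queue, so the held-back overlapping write can be wound. */
static void writev_cbk_fixed(struct req *w1) {
    w1->on_wip = false;
    w1->refs--;
    process_queue(w1, "fixed");
    w1->refs--;                      /* the later unref no longer matters */
}

int main(void) {
    struct req w1 = { .on_wip = true, .refs = 2 };
    writev_cbk_buggy(&w1);
    w1 = (struct req){ .on_wip = true, .refs = 2 };
    writev_cbk_fixed(&w1);
    return 0;
}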
From bugzilla at redhat.com Tue May 7 02:51:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 02:51:11 +0000 Subject: [Bugs] [Bug 1707198] New: VM stuck in a shutdown because of a pending fuse request Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707198 Bug ID: 1707198 Summary: VM stuck in a shutdown because of a pending fuse request Product: GlusterFS Version: 5 OS: Linux Status: NEW Component: write-behind Severity: medium Priority: medium Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: bugs at gluster.org, nravinas at redhat.com, rgowdapp at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com Depends On: 1702686, 1705865 Blocks: 1707195 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1705865 [Bug 1705865] VM stuck in a shutdown because of a pending fuse request https://bugzilla.redhat.com/show_bug.cgi?id=1707195 [Bug 1707195] VM stuck in a shutdown because of a pending fuse request -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 02:51:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 02:51:11 +0000 Subject: [Bugs] [Bug 1707195] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707195 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1707198 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1707198 [Bug 1707198] VM stuck in a shutdown because of a pending fuse request -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 02:53:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 02:53:40 +0000 Subject: [Bugs] [Bug 1707198] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707198 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22669 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 02:53:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 02:53:41 +0000 Subject: [Bugs] [Bug 1707198] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707198 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22669 (performance/write-behind: remove request from wip list in wb_writev_cbk) posted (#1) for review on release-5 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue May 7 02:54:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 02:54:05 +0000 Subject: [Bugs] [Bug 1707200] New: VM stuck in a shutdown because of a pending fuse request Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707200 Bug ID: 1707200 Summary: VM stuck in a shutdown because of a pending fuse request Product: GlusterFS Version: 4.1 OS: Linux Status: NEW Component: write-behind Severity: medium Priority: medium Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: bugs at gluster.org, nravinas at redhat.com, rgowdapp at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com Depends On: 1702686, 1707195, 1707198, 1705865 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1705865 [Bug 1705865] VM stuck in a shutdown because of a pending fuse request https://bugzilla.redhat.com/show_bug.cgi?id=1707195 [Bug 1707195] VM stuck in a shutdown because of a pending fuse request https://bugzilla.redhat.com/show_bug.cgi?id=1707198 [Bug 1707198] VM stuck in a shutdown because of a pending fuse request -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 02:54:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 02:54:05 +0000 Subject: [Bugs] [Bug 1707195] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707195 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1707200 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1707200 [Bug 1707200] VM stuck in a shutdown because of a pending fuse request -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 02:54:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 02:54:05 +0000 Subject: [Bugs] [Bug 1707198] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707198 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1707200 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1707200 [Bug 1707200] VM stuck in a shutdown because of a pending fuse request -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 05:12:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 05:12:31 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #649 from Worker Ant --- REVIEW: https://review.gluster.org/22665 (libglusterfs: Fix compilation when --disable-mempool is used) merged (#2) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue May 7 05:17:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 05:17:07 +0000 Subject: [Bugs] [Bug 1707200] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707200 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22670 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 05:17:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 05:17:08 +0000 Subject: [Bugs] [Bug 1707200] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707200 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22670 (performance/write-behind: remove request from wip list in wb_writev_cbk) posted (#1) for review on release-4.1 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 05:36:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 05:36:53 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1621 from Worker Ant --- REVIEW: https://review.gluster.org/22610 (afr : fix Coverity CID 1398627) merged (#7) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 05:56:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 05:56:57 +0000 Subject: [Bugs] [Bug 1707227] New: glusterfsd memory leak after enable tls/ssl Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707227 Bug ID: 1707227 Summary: glusterfsd memory leak after enable tls/ssl Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: rpc Severity: high Assignee: bugs at gluster.org Reporter: zz.sh.cynthia at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: glusterfsd memory leak found Version-Release number of selected component (if applicable): 3.12.15 How reproducible: while true;do gluster v heal info;done and open another session to check the memory usage of the related glusterfsd process, the memory will keep increasing until around 370M then increase will stop Steps to Reproduce: 1.while true;do gluster v heal info;done 2.check the memory usage of the related glusterfsd process 3. 
Actual results: the memory will keep increasing until around 370M then increase will stop Expected results: memory stable Additional info: with memory scan tool vlagrand attached to glusterfsd process and libleak attached to glusterfsd process seems ssl_accept is suspicious, not sure it is caused by ssl_accept or glusterfs mis-use of ssl: ==16673== 198,720 bytes in 12 blocks are definitely lost in loss record 1,114 of 1,123 ==16673== at 0x4C2EB7B: malloc (vg_replace_malloc.c:299) ==16673== by 0x63E1977: CRYPTO_malloc (in /usr/lib64/libcrypto.so.1.0.2p) ==16673== by 0xA855E0C: ssl3_setup_write_buffer (in /usr/lib64/libssl.so.1.0.2p) ==16673== by 0xA855E77: ssl3_setup_buffers (in /usr/lib64/libssl.so.1.0.2p) ==16673== by 0xA8485D9: ssl3_accept (in /usr/lib64/libssl.so.1.0.2p) ==16673== by 0xA610DDF: ssl_complete_connection (socket.c:400) ==16673== by 0xA617F38: ssl_handle_server_connection_attempt (socket.c:2409) ==16673== by 0xA618420: socket_complete_connection (socket.c:2554) ==16673== by 0xA618788: socket_event_handler (socket.c:2613) ==16673== by 0x4ED6983: event_dispatch_epoll_handler (event-epoll.c:587) ==16673== by 0x4ED6C5A: event_dispatch_epoll_worker (event-epoll.c:663) ==16673== by 0x615C5D9: start_thread (in /usr/lib64/libpthread-2.27.so) ==16673== ==16673== 200,544 bytes in 12 blocks are definitely lost in loss record 1,115 of 1,123 ==16673== at 0x4C2EB7B: malloc (vg_replace_malloc.c:299) ==16673== by 0x63E1977: CRYPTO_malloc (in /usr/lib64/libcrypto.so.1.0.2p) ==16673== by 0xA855D12: ssl3_setup_read_buffer (in /usr/lib64/libssl.so.1.0.2p) ==16673== by 0xA855E68: ssl3_setup_buffers (in /usr/lib64/libssl.so.1.0.2p) ==16673== by 0xA8485D9: ssl3_accept (in /usr/lib64/libssl.so.1.0.2p) ==16673== by 0xA610DDF: ssl_complete_connection (socket.c:400) ==16673== by 0xA617F38: ssl_handle_server_connection_attempt (socket.c:2409) ==16673== by 0xA618420: socket_complete_connection (socket.c:2554) ==16673== by 0xA618788: socket_event_handler (socket.c:2613) ==16673== by 0x4ED6983: event_dispatch_epoll_handler (event-epoll.c:587) ==16673== by 0x4ED6C5A: event_dispatch_epoll_worker (event-epoll.c:663) ==16673== by 0x615C5D9: start_thread (in /usr/lib64/libpthread-2.27.so) ==16673== valgrind --leak-check=f also, with another memory leak scan tool libleak: callstack[2419] expires. count=1 size=224/224 alloc=362 free=350 /home/robot/libleak/libleak.so(malloc+0x25) [0x7f1460604065] /lib64/libcrypto.so.10(CRYPTO_malloc+0x58) [0x7f145ecd9978] /lib64/libcrypto.so.10(EVP_DigestInit_ex+0x2a9) [0x7f145ed95749] /lib64/libssl.so.10(ssl3_digest_cached_records+0x11d) [0x7f145abb6ced] /lib64/libssl.so.10(ssl3_accept+0xc8f) [0x7f145abadc4f] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(ssl_complete_connection+0x5e) [0x7f145ae00f3a] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(+0xc16d) [0x7f145ae0816d] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(+0xc68a) [0x7f145ae0868a] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(+0xc9f2) [0x7f145ae089f2] /lib64/libglusterfs.so.0(+0x9b96f) [0x7f146038596f] /lib64/libglusterfs.so.0(+0x9bc46) [0x7f1460385c46] /lib64/libpthread.so.0(+0x75da) [0x7f145f0d15da] /lib64/libc.so.6(clone+0x3f) [0x7f145e9a7eaf] callstack[2432] expires. 
count=1 size=104/104 alloc=362 free=0 /home/robot/libleak/libleak.so(malloc+0x25) [0x7f1460604065] /lib64/libcrypto.so.10(CRYPTO_malloc+0x58) [0x7f145ecd9978] /lib64/libcrypto.so.10(BN_MONT_CTX_new+0x17) [0x7f145ed48627] /lib64/libcrypto.so.10(BN_MONT_CTX_set_locked+0x6d) [0x7f145ed489fd] /lib64/libcrypto.so.10(+0xff4d9) [0x7f145ed6a4d9] /lib64/libcrypto.so.10(int_rsa_verify+0x1cd) [0x7f145ed6d41d] /lib64/libcrypto.so.10(RSA_verify+0x32) [0x7f145ed6d972] /lib64/libcrypto.so.10(+0x107ff5) [0x7f145ed72ff5] /lib64/libcrypto.so.10(EVP_VerifyFinal+0x211) [0x7f145ed9dd51] /lib64/libssl.so.10(ssl3_get_cert_verify+0x5bb) [0x7f145abac06b] /lib64/libssl.so.10(ssl3_accept+0x988) [0x7f145abad948] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(ssl_complete_connection+0x5e) [0x7f145ae00f3a] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(+0xc16d) [0x7f145ae0816d] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(+0xc68a) [0x7f145ae0868a] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(+0xc9f2) [0x7f145ae089f2] /lib64/libglusterfs.so.0(+0x9b96f) [0x7f146038596f] /lib64/libglusterfs.so.0(+0x9bc46) [0x7f1460385c46] /lib64/libpthread.so.0(+0x75da) [0x7f145f0d15da] /lib64/libc.so.6(clone+0x3f) [0x7f145e9a7eaf] -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 06:06:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 06:06:45 +0000 Subject: [Bugs] [Bug 1698861] Renaming a directory when 2 bricks of multiple disperse subvols are down leaves both old and new dirs on the bricks. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698861 Ashish Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED CC| |aspandey at redhat.com --- Comment #1 from Ashish Pandey --- Steps - 1 - Create 4+2 volume and mount it on /mnt/vol Volume Name: vol Type: Disperse Volume ID: 742b8e08-1f16-4bad-aa94-5e36dd10fe91 Status: Started Snapshot Count: 0 Number of Bricks: 1 x (4 + 2) = 6 Transport-type: tcp Bricks: Brick1: apandey:/home/apandey/bricks/gluster/vol-1 Brick2: apandey:/home/apandey/bricks/gluster/vol-2 Brick3: apandey:/home/apandey/bricks/gluster/vol-3 Brick4: apandey:/home/apandey/bricks/gluster/vol-4 Brick5: apandey:/home/apandey/bricks/gluster/vol-5 Brick6: apandey:/home/apandey/bricks/gluster/vol-6 Options Reconfigured: transport.address-family: inet nfs.disable: on Status of volume: vol Gluster process 2 - mkdir /mnt/vol/dir/old -p 3 -for i in {1..200}; do touch dir/old/file-$i ; done [root at apandey glusterfs]# gluster v status Status of volume: vol Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick apandey:/home/apandey/bricks/gluster/ vol-1 49152 0 Y 13401 Brick apandey:/home/apandey/bricks/gluster/ vol-2 49153 0 Y 11682 Brick apandey:/home/apandey/bricks/gluster/ vol-3 49154 0 Y 11702 Brick apandey:/home/apandey/bricks/gluster/ vol-4 49155 0 Y 11722 Brick apandey:/home/apandey/bricks/gluster/ vol-5 49156 0 Y 11742 Brick apandey:/home/apandey/bricks/gluster/ vol-6 49157 0 Y 11762 Self-heal Daemon on localhost N/A N/A Y 13427 Task Status of Volume vol ------------------------------------------------------------------------------ There are no active volume tasks 4 - Kill brick 1 [root at apandey glusterfs]# kill 13401 [root at apandey glusterfs]# gluster v status Status of volume: vol 
Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick apandey:/home/apandey/bricks/gluster/ vol-1 N/A N/A N N/A Brick apandey:/home/apandey/bricks/gluster/ vol-2 49153 0 Y 11682 Brick apandey:/home/apandey/bricks/gluster/ vol-3 49154 0 Y 11702 Brick apandey:/home/apandey/bricks/gluster/ vol-4 49155 0 Y 11722 Brick apandey:/home/apandey/bricks/gluster/ vol-5 49156 0 Y 11742 Brick apandey:/home/apandey/bricks/gluster/ vol-6 49157 0 Y 11762 Self-heal Daemon on localhost N/A N/A Y 13427 Task Status of Volume vol ------------------------------------------------------------------------------ There are no active volume tasks 5 - mv dir/old/ dir/new 6 - [root at apandey vol]# ll dir/new | wc -l 201 7 - gluster v start vol force 8 -ll dir/new | wc -l 1 9 - ll dir/old | wc -l 1 10- [root at apandey glusterfs]# getfattr -m. -d -e hex /home/apandey/bricks/gluster/vol-*/dir/old getfattr: Removing leading '/' from absolute path names # file: home/apandey/bricks/gluster/vol-1/dir/old trusted.ec.dirty=0x00000000000000010000000000000001 trusted.ec.version=0x00000000000000c800000000000000c8 trusted.gfid=0x098b8bf8e0ba406283f26334a0b83e23 trusted.glusterfs.dht=0x000000000000000000000000ffffffff # file: home/apandey/bricks/gluster/vol-2/dir/old trusted.ec.dirty=0x00000000000000000000000000000000 trusted.gfid=0x098b8bf8e0ba406283f26334a0b83e23 # file: home/apandey/bricks/gluster/vol-3/dir/old trusted.ec.dirty=0x00000000000000000000000000000000 trusted.gfid=0x098b8bf8e0ba406283f26334a0b83e23 # file: home/apandey/bricks/gluster/vol-4/dir/old trusted.ec.dirty=0x00000000000000000000000000000000 trusted.gfid=0x098b8bf8e0ba406283f26334a0b83e23 # file: home/apandey/bricks/gluster/vol-5/dir/old trusted.ec.dirty=0x00000000000000000000000000000000 trusted.gfid=0x098b8bf8e0ba406283f26334a0b83e23 # file: home/apandey/bricks/gluster/vol-6/dir/old trusted.ec.dirty=0x00000000000000000000000000000000 trusted.gfid=0x098b8bf8e0ba406283f26334a0b83e23 11- [root at apandey glusterfs]# getfattr -m. 
-d -e hex /home/apandey/bricks/gluster/vol-*/dir/new getfattr: Removing leading '/' from absolute path names # file: home/apandey/bricks/gluster/vol-1/dir/new trusted.gfid=0x098b8bf8e0ba406283f26334a0b83e23 # file: home/apandey/bricks/gluster/vol-2/dir/new trusted.ec.version=0x00000000000000c800000000000000c8 trusted.gfid=0x098b8bf8e0ba406283f26334a0b83e23 trusted.glusterfs.dht=0x000000000000000000000000ffffffff # file: home/apandey/bricks/gluster/vol-3/dir/new trusted.ec.version=0x00000000000000c800000000000000c8 trusted.gfid=0x098b8bf8e0ba406283f26334a0b83e23 trusted.glusterfs.dht=0x000000000000000000000000ffffffff # file: home/apandey/bricks/gluster/vol-4/dir/new trusted.ec.version=0x00000000000000c800000000000000c8 trusted.gfid=0x098b8bf8e0ba406283f26334a0b83e23 trusted.glusterfs.dht=0x000000000000000000000000ffffffff # file: home/apandey/bricks/gluster/vol-5/dir/new trusted.ec.version=0x00000000000000c800000000000000c8 trusted.gfid=0x098b8bf8e0ba406283f26334a0b83e23 trusted.glusterfs.dht=0x000000000000000000000000ffffffff # file: home/apandey/bricks/gluster/vol-6/dir/new trusted.ec.version=0x00000000000000c800000000000000c8 trusted.gfid=0x098b8bf8e0ba406283f26334a0b83e23 trusted.glusterfs.dht=0x000000000000000000000000ffffffff 12 - [root at apandey glusterfs]# gluster v heal vol info Brick apandey:/home/apandey/bricks/gluster/vol-1 Status: Connected Number of entries: 0 Brick apandey:/home/apandey/bricks/gluster/vol-2 Status: Connected Number of entries: 0 Brick apandey:/home/apandey/bricks/gluster/vol-3 Status: Connected Number of entries: 0 Brick apandey:/home/apandey/bricks/gluster/vol-4 Status: Connected Number of entries: 0 Brick apandey:/home/apandey/bricks/gluster/vol-5 Status: Connected Number of entries: 0 Brick apandey:/home/apandey/bricks/gluster/vol-6 Status: Connected Number of entries: 0 13 - As we can see that the trusted.glusterfs.dht=0x000000000000000000000000ffffffff is missing for "old" directory on all the 5 bricks, I set this xattr manually. setfattr -n trusted.glusterfs.dht -v 0x000000000000000000000000ffffffff /home/apandey/bricks/gluster/vol-{2..6}/dir/old 14 - I copied the data from new dir to old dir on respective bricks - 15 - for i in {2..6} ; do yes | cp -rf /home/apandey/bricks/gluster/vol-$i/dir/new/* /home/apandey/bricks/gluster/vol-$i/dir/old/; done 16 - After this files were visible on both the old and new dir [root at apandey vol]# ll dir/new | wc -l 201 [root at apandey vol]# ll dir/old | wc -l 201 [root at apandey vol]# 17 - Although this will have both the directories, if we have all the data back and all the bricks are UP, we can safely move the data in new directory. This is working for the issue which we created using our set of steps. I am not sure if this case is exactly similar to what user is experiencing or not. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 08:25:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 08:25:44 +0000 Subject: [Bugs] [Bug 1679401] Geo-rep setup creates an incorrectly formatted authorized_keys file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679401 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22673 -- You are receiving this mail because: You are on the CC list for the bug. 
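The manual recovery walked through in comment #1 on bug 1698861 above (steps 13-16) can be gathered into a single script-style sketch. It only restates the commands already shown in that comment and carries the same assumptions: a 1 x (4 + 2) disperse volume mounted at /mnt/vol, all bricks back online, the healthy copies sitting under dir/new on bricks 2-6, and the reporter's brick paths (adjust BRICK_BASE for any other layout).

# Hedged restatement of steps 13-16 from the comment above; paths are the reporter's.
BRICK_BASE=/home/apandey/bricks/gluster

# Step 13: restore the missing dht layout xattr on the stale "old" directory.
for i in {2..6}; do
    setfattr -n trusted.glusterfs.dht -v 0x000000000000000000000000ffffffff \
        "$BRICK_BASE/vol-$i/dir/old"
done

# Steps 14-15: copy the entries from the renamed directory back, brick by brick.
for i in {2..6}; do
    yes | cp -rf "$BRICK_BASE/vol-$i/dir/new/"* "$BRICK_BASE/vol-$i/dir/old/"
done

# Step 16: both directories should now list all the files from the mount point.
ls /mnt/vol/dir/old | wc -l
ls /mnt/vol/dir/new | wc -l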
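For the glusterfsd memory-leak report in bug 1707227 a few messages above, the reproduction is only described as "run gluster v heal info in a loop and check the memory usage of the related glusterfsd process". A small helper that does both at once is sketched below; it is a convenience under stated assumptions, not taken from the report: VOLNAME and the sampling interval are placeholders, and it simply samples RSS via ps rather than attaching valgrind or libleak.

# Hedged reproduction/monitoring sketch for the TLS/SSL brick memory growth.
VOLNAME=myvol          # placeholder
INTERVAL=10            # seconds between RSS samples

# Client loop: each heal-info invocation opens fresh connections to the bricks.
while true; do gluster volume heal "$VOLNAME" info > /dev/null; done &
LOOP_PID=$!

# Sample the resident set size of every brick process; interrupt with Ctrl-C, then kill $LOOP_PID.
while kill -0 "$LOOP_PID" 2>/dev/null; do
    date '+%F %T'
    for pid in $(pgrep -x glusterfsd); do
        echo "glusterfsd pid=$pid rss_kb=$(ps -o rss= -p "$pid")"
    done
    sleep "$INTERVAL"
done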
From bugzilla at redhat.com Tue May 7 08:25:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 08:25:45 +0000 Subject: [Bugs] [Bug 1679401] Geo-rep setup creates an incorrectly formatted authorized_keys file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679401 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22673 (geo-rep: fix incorrectly formatted authorized_keys) posted (#1) for review on master by Sunny Kumar -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 09:24:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 09:24:31 +0000 Subject: [Bugs] [Bug 1654753] A distributed-disperse volume crashes when a symbolic link is renamed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654753 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22666 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 09:24:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 09:24:32 +0000 Subject: [Bugs] [Bug 1654753] A distributed-disperse volume crashes when a symbolic link is renamed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654753 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22666 (dht: use separate inode for linkto file create and setattr) posted (#2) for review on master by Susant Palai -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 09:28:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 09:28:26 +0000 Subject: [Bugs] [Bug 1702316] Cannot upgrade 5.x volume to 6.1 because of unused 'crypt' and 'bd' xlators In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702316 robdewit changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Github | |gluster/glusterfs/issues/66 | |5 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 10:11:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 10:11:28 +0000 Subject: [Bugs] [Bug 1706842] Hard Failover with Samba and Glusterfs fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706842 --- Comment #2 from david.spisla at iternity.com --- Additional information: In the section "Description of problem" above there are shown log entries from glusterd while failover happens. These logs are from 2019-04-16. But the backtrace was created on 2019-04-30 and the attached logs of the glusterfs-plugin from all nodes contains information from 2019-04-30. Don't get irritated! The messages in glusterd are reproducible so one can find them also in 2019-04-30. -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 10:14:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 10:14:19 +0000 Subject: [Bugs] [Bug 1706842] Hard Failover with Samba and Glusterfs fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706842 --- Comment #3 from david.spisla at iternity.com --- Created attachment 1565074 --> https://bugzilla.redhat.com/attachment.cgi?id=1565074&action=edit Logfiles from all nodes of glusterfs-plugin (SMB) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 12:00:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 12:00:43 +0000 Subject: [Bugs] [Bug 1703343] Bricks fail to come online after node reboot on a scaled setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703343 Raghavendra Talur changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1638192 Depends On|1638192 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1638192 [Bug 1638192] Bricks fail to come online after node reboot on a scaled setup -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 12:01:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 12:01:18 +0000 Subject: [Bugs] [Bug 1703343] Bricks fail to come online after node reboot on a scaled setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703343 Raghavendra Talur changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1637968 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1637968 [Bug 1637968] [RHGS] [Glusterd] Bricks fail to come online after node reboot on a scaled setup -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 12:22:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 12:22:49 +0000 Subject: [Bugs] [Bug 1706603] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706603 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 12:25:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 12:25:27 +0000 Subject: [Bugs] [Bug 1706603] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706603 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22674 -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Tue May 7 12:25:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 12:25:28 +0000 Subject: [Bugs] [Bug 1706603] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706603 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22674 (tests: Test openfd heal doesn't truncate files) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 12:39:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 12:39:33 +0000 Subject: [Bugs] [Bug 1707393] New: Refactor dht lookup code Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707393 Bug ID: 1707393 Summary: Refactor dht lookup code Product: GlusterFS Version: 6 Status: NEW Component: distribute Keywords: Reopened Severity: medium Priority: high Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org Depends On: 1590385 Blocks: 1703897 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1590385 +++ Description of problem: Refactor the dht lookup code in order to make it easier to maintain. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2018-06-12 14:24:47 UTC --- REVIEW: https://review.gluster.org/20246 (cluster/dht: refactor dht_lookup) posted (#1) for review on master by N Balachandran --- Additional comment from Worker Ant on 2018-06-14 07:09:47 UTC --- REVIEW: https://review.gluster.org/20267 (cluster/dht: Minor code cleanup) posted (#1) for review on master by N Balachandran --- Additional comment from Worker Ant on 2018-06-20 02:40:18 UTC --- COMMIT: https://review.gluster.org/20267 committed in master by "N Balachandran" with a commit message- cluster/dht: Minor code cleanup Removed extra variable. Change-Id: If43c47f6630454aeadab357a36d061ec0b53cdb5 updates: bz#1590385 Signed-off-by: N Balachandran --- Additional comment from Worker Ant on 2018-06-21 05:36:13 UTC --- COMMIT: https://review.gluster.org/20246 committed in master by "Amar Tumballi" with a commit message- cluster/dht: refactor dht_lookup The dht lookup code is getting difficult to maintain due to its size. Refactoring the code will make it easier to modify it in future. Change-Id: Ic7cb5bf4f018504dfaa7f0d48cf42ab0aa34abdd updates: bz#1590385 Signed-off-by: N Balachandran --- Additional comment from Worker Ant on 2018-08-02 16:20:48 UTC --- REVIEW: https://review.gluster.org/20622 (cluster/dht: refactor dht_lookup_cbk) posted (#1) for review on master by N Balachandran --- Additional comment from Shyamsundar on 2018-10-23 15:11:13 UTC --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report. glusterfs-5.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html [2] https://www.gluster.org/pipermail/gluster-users/ --- Additional comment from Nithya Balachandran on 2018-10-29 02:58:27 UTC --- Reopening as this is an umbrella BZ for many more changes to the rebalance process. --- Additional comment from Worker Ant on 2018-12-06 13:57:38 UTC --- REVIEW: https://review.gluster.org/21816 (cluster/dht: refactor dht_lookup_cbk) posted (#1) for review on master by N Balachandran --- Additional comment from Worker Ant on 2018-12-26 12:41:37 UTC --- REVIEW: https://review.gluster.org/21816 (cluster/dht: refactor dht_lookup_cbk) posted (#7) for review on master by N Balachandran --- Additional comment from Nithya Balachandran on 2018-12-26 12:56:32 UTC --- Reopening this as there will be more changes. --- Additional comment from Worker Ant on 2019-03-25 10:29:50 UTC --- REVIEW: https://review.gluster.org/22407 (cluster/dht: refactor dht lookup functions) posted (#1) for review on master by N Balachandran --- Additional comment from Shyamsundar on 2019-03-25 16:30:27 UTC --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ --- Additional comment from Worker Ant on 2019-04-06 01:41:34 UTC --- REVIEW: https://review.gluster.org/22407 (cluster/dht: refactor dht lookup functions) merged (#10) on master by N Balachandran --- Additional comment from Worker Ant on 2019-04-10 09:03:15 UTC --- REVIEW: https://review.gluster.org/22542 (cluster/dht: Refactor dht lookup functions) posted (#1) for review on master by N Balachandran --- Additional comment from Worker Ant on 2019-04-25 04:12:37 UTC --- REVIEW: https://review.gluster.org/22542 (cluster/dht: Refactor dht lookup functions) merged (#3) on master by Amar Tumballi --- Additional comment from Nithya Balachandran on 2019-04-29 03:21:31 UTC --- Marking this Modified as I am done with the changes for now. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 [Bug 1590385] Refactor dht lookup code -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 12:39:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 12:39:33 +0000 Subject: [Bugs] [Bug 1590385] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1707393 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1707393 [Bug 1707393] Refactor dht lookup code -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue May 7 12:51:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 12:51:49 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22675 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 12:51:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 12:51:50 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #650 from Worker Ant --- REVIEW: https://review.gluster.org/22675 ([WIP]glusterd-utils.c: skip checksum when possible.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 12:53:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 12:53:02 +0000 Subject: [Bugs] [Bug 1707393] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707393 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22676 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 12:53:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 12:53:03 +0000 Subject: [Bugs] [Bug 1707393] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707393 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22676 (cluster/dht: refactor dht lookup functions) posted (#1) for review on release-6 by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 13:40:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 13:40:15 +0000 Subject: [Bugs] [Bug 902955] [enhancement] Provide a clear and easy way to integrate 3rd party translators In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=902955 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed|2017-03-08 10:49:06 |2019-05-07 13:40:15 --- Comment #7 from Amar Tumballi --- Joe, while this has been a valid request, we couldn't get to implement this with GD1 (and later tried it with GD2 as template based volgen). With current scope of things, we are not considering it at the moment. Closing it as DEFERRED, so we can revisit at these after couple of more releases, depending on the capacity. -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue May 7 13:48:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 13:48:08 +0000 Subject: [Bugs] [Bug 1032382] autogen.sh warnings with automake-1.14 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1032382 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 6306 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 13:48:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 13:48:10 +0000 Subject: [Bugs] [Bug 1032382] autogen.sh warnings with automake-1.14 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1032382 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/6306 (Set subdir-objects automake option in configure.ac) posted (#2) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 13:49:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 13:49:43 +0000 Subject: [Bugs] [Bug 1037511] Operation not permitted occurred during setattr of In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1037511 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-07 13:49:43 --- Comment #11 from Amar Tumballi --- Not heard of this issue in sometime. Regret that it was open for a long time, and apologize for the same. But we are not 'concentrating' on Quota feature at the moment, and hence marking bug as DEFERRED. Will reopen if we get cycles to look into after couple of releases. Please raise the concern in mailing list if this is concerning. Meantime, we request to upgrade to glusterfs-6.x releases to utilize some stability improvements over the years. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 13:51:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 13:51:06 +0000 Subject: [Bugs] [Bug 1065634] Enabling compression and encryption translators on the same volume causes data corruption In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1065634 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WONTFIX Last Closed| |2019-05-07 13:51:06 --- Comment #7 from Amar Tumballi --- The feature is now deprecated from glusterfs-6.0 release. We will not be taking any work on this area for now. If this is a critical feature for you, please feel free to raise the issue in github or mailing list. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue May 7 13:57:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 13:57:54 +0000 Subject: [Bugs] [Bug 1131447] [Dist-geo-rep] : Session folders does not sync after a peer probe to new node. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1131447 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-07 13:57:54 --- Comment #4 from Amar Tumballi --- Have not heard about this in a long time. Will be closing it as WORKSFORME. If anyone finds the issue feel free to reopen. I recommend upgrading to glusterfs-6.x releases or above for trying out these things. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 13:59:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 13:59:33 +0000 Subject: [Bugs] [Bug 1155181] Lots of compilation warnings on OSX. We should probably fix them. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1155181 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |EasyFix Priority|medium |low Severity|medium |low --- Comment #23 from Amar Tumballi --- Good for someone new to GlusterFS to pick this and fix it. Keeping this open. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 14:00:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 14:00:46 +0000 Subject: [Bugs] [Bug 1158051] [USS]: files/directories with the name of entry-point directory present in the snapshots cannot be accessed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1158051 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WONTFIX Last Closed| |2019-05-07 14:00:46 --- Comment #1 from Amar Tumballi --- While this is a valid case, we are not interested to fix it, and rather call it as a intended behavior. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 14:00:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 14:00:47 +0000 Subject: [Bugs] [Bug 1160678] [USS]: In Fuse files/directories with the name of entry-point directory present in the snapshots cannot be accessed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1160678 Bug 1160678 depends on bug 1158051, which changed state. Bug 1158051 Summary: [USS]: files/directories with the name of entry-point directory present in the snapshots cannot be accessed https://bugzilla.redhat.com/show_bug.cgi?id=1158051 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue May 7 14:22:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 14:22:29 +0000 Subject: [Bugs] [Bug 1158130] Not possible to disable fopen-keeo-cache when mounting In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1158130 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low CC| |atumball at redhat.com Assignee|vbellur at redhat.com |atumball at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 14:26:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 14:26:29 +0000 Subject: [Bugs] [Bug 1158130] Not possible to disable fopen-keeo-cache when mounting In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1158130 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22678 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 14:26:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 14:26:31 +0000 Subject: [Bugs] [Bug 1158130] Not possible to disable fopen-keeo-cache when mounting In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1158130 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22678 (mount.glusterfs: make fcache-keep-open option take a value) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 14:32:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 14:32:12 +0000 Subject: [Bugs] [Bug 1179179] When an unsupported AUTH_* scheme is used, the RPC-Reply should contain MSG_DENIED/AUTH_ERROR/AUTH_FAILED In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1179179 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-07 14:32:12 --- Comment #5 from Amar Tumballi --- With the focus of the project not containing gNFS related improvements, marking it as DEFERRED for now. We will look into this after couple of releases to take stock of things. Please send an email to mailing list if you find this critical. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 14:35:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 14:35:17 +0000 Subject: [Bugs] [Bug 1183054] rpmlint throws couple of errors for RPM spec file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1183054 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|high |medium CC| |atumball at redhat.com, | |kkeithle at redhat.com, | |ndevos at redhat.com, | |rkothiya at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue May 7 14:40:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 14:40:37 +0000 Subject: [Bugs] [Bug 1187347] RPC ping does not retransmit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1187347 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-07 14:40:37 --- Comment #2 from Amar Tumballi --- Hi Scott, thanks for your detailed report. We regret keeping it open for such a long time. Currently we recommend you to upgrade to glusterfs-6.x and see if the behavior is fine for you. With the current scope of things, we can't pick this bug to work on (as there are options for having backup-volfile-server etc). Will keep this bug under DEFERRED status, we will revisit this after couple of releases. We also are looking at implementing a different n/w layer based solution (ref: https://github.com/gluster/glusterfs/issues/391 & https://github.com/gluster/glusterfs/issues/505). Feel free to follow those issues to keep track of the progress. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 14:48:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 14:48:50 +0000 Subject: [Bugs] [Bug 1196028] libgfapi: glfs_init() hangs on pthread_cond_wait() when user is non-root In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1196028 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-07 14:48:50 --- Comment #4 from Amar Tumballi --- Tried glfsxmp.c as a non-user, and things are working fine with glusterfs-6.x -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 14:59:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 14:59:06 +0000 Subject: [Bugs] [Bug 1214671] Diagnosis and recommended fix to be added in glusterd-messages.h In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1214671 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-07 14:59:06 --- Comment #4 from Amar Tumballi --- While this is a valid ask, we can't pick this up till we look into structured logging improvements, which we would pickup before this. So keeping the effort as DEFERRED with current scope, please raise issues through email (Mailinglist), if there are concerns. We will revisit the issue after couple of releases to take a stand. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 15:14:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 15:14:06 +0000 Subject: [Bugs] [Bug 1215017] gf_msg not giving output to STDOUT. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1215017 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WONTFIX Last Closed| |2019-05-07 15:14:06 --- Comment #3 from Amar Tumballi --- Not planning to change gf_log() to gf_msg() before daemonizing the process. Hence marking as WONTFIX. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 15:22:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 15:22:27 +0000 Subject: [Bugs] [Bug 1215022] Populate message IDs with recommended action. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1215022 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Flags|needinfo?(hchiramm at redhat.c | |om) | Last Closed| |2019-05-07 15:22:27 --- Comment #4 from Amar Tumballi --- > So should we open a more specific issue and close this one? I am inclined towards closing this bug, and open new bugs to track specific efforts. Closing with CURRENTRELEASE, and will open any particular bugs per components so we can track them to future releases. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 15:25:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 15:25:13 +0000 Subject: [Bugs] [Bug 1215129] After adding/removing the bricks to the volume, bitrot is crawling bricks of other bitrot enabled volumes. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1215129 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Assignee|bugs at gluster.org |rabhat at redhat.com Last Closed| |2019-05-07 15:25:13 --- Comment #8 from Amar Tumballi --- This is not seen in latest releases (glusterfs-6.x). Please reopen if seen again. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Tue May 7 15:26:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 15:26:54 +0000 Subject: [Bugs] [Bug 1217372] Disperse volume: NFS client mount point hung after the bricks came back up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1217372 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-07 15:26:54 --- Comment #3 from Amar Tumballi --- As of glusterfs-6.x have not heard of this issue. Considering we haven't run a test to validate this, marking it as DEFERRED for now, and revisiting this after couple of releases. If you find the issue is happening still with glusterfs-6.x releases, feel free to re-open it. -- You are receiving this mail because: You are on the CC list for the bug. 
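Relating to the DEFERRED comment on bug 1187347 further above, which points to the backup-volfile-server mount options as the existing fallback mechanism: a typical client-side use is sketched below. Hostnames, the volume name and the mount point are placeholders, and note (hedged) that the option only governs where the volume file is fetched from at mount time; it is not by itself an I/O failover path.

# Hedged example of specifying fallback volfile servers at mount time.
mount -t glusterfs \
    -o backup-volfile-servers=server2:server3,log-level=WARNING \
    server1:/myvol /mnt/myvol

# Rough /etc/fstab equivalent (single line):
# server1:/myvol /mnt/myvol glusterfs defaults,_netdev,backup-volfile-servers=server2:server3 0 0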
From bugzilla at redhat.com Tue May 7 21:05:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 21:05:17 +0000 Subject: [Bugs] [Bug 1221980] bitd log grows rapidly if brick goes down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1221980 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|NEW |CLOSED CC| |atumball at redhat.com Assignee|bugs at gluster.org |rabhat at redhat.com Resolution|--- |WORKSFORME Fixed In Version| |glusterfs-6.x Severity|unspecified |low Last Closed| |2019-05-07 21:05:17 --- Comment #4 from Amar Tumballi --- Not seeing with latest glusterfs-6.x releases. Please reopen if seen. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Tue May 7 21:06:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 21:06:13 +0000 Subject: [Bugs] [Bug 1231171] [RFE]- How to find total number of glusterfs client mounts? In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1231171 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed|2018-10-07 13:32:54 |2019-05-07 21:06:13 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 21:10:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 21:10:36 +0000 Subject: [Bugs] [Bug 1246024] gluster commands space in brick path fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1246024 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-07 21:10:36 --- Comment #2 from Amar Tumballi --- This remains as a bug, but fixing this currently is not a priority among other things. It would become a major work, as everywhere we take brick as argument, we need to handle space. We will add this in documentation, and mark this bug as DEFERRED, so we can revisit this after couple of releases. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 7 21:14:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 21:14:12 +0000 Subject: [Bugs] [Bug 1251614] gf_defrag_fix_layout recursively fails, distracting from the root cause In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1251614 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-07 21:14:12 --- Comment #1 from Amar Tumballi --- With introduction of commit-hash and other things in rebalance, this looks more fool proof, and didn't see any issues in latest codebase. Will mark it as WORKSFORME (with glusterfs-6.x) release. If the issue persists, will take it up in one of the future releases. 
-- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 7 21:23:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 07 May 2019 21:23:04 +0000 Subject: [Bugs] [Bug 1255582] Add the ability to force an unconditional graph reload In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1255582 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WONTFIX Last Closed| |2019-05-07 21:23:04 --- Comment #1 from Amar Tumballi --- Joe, while this RFE makes sense when there are those rare issues, considering we have the work around to force a re-load by changing an option (like you mentioned above), we are deprioritizing this feature request (well, I understand this is sitting here without an update for last 4yrs). We will mark this as WONTFIX for now. Please feel free to reopen, if you find it critical. For those who are reading this bug, to achieve the same, do a change in volfile (or gluster volume set $volname some-option some-value), and that should take care of the issue. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 03:01:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 03:01:28 +0000 Subject: [Bugs] [Bug 1707656] New: pod cannot mount a gluster volume - failed to fetch volume file Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707656 Bug ID: 1707656 Summary: pod cannot mount a gluster volume - failed to fetch volume file Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: srakonde at redhat.com Target Milestone: --- Group: private Classification: Community This bug was initially created as a copy of Bug #1705888 Description of problem: Gluster version installed on the nodes: glusterfs-libs-3.12.2-47.el7rhgs.x86_64 glusterfs-3.12.2-47.el7rhgs.x86_64 glusterfs-client-xlators-3.12.2-47.el7rhgs.x86_64 glusterfs-server-3.12.2-47.el7rhgs.x86_64 glusterfs-api-3.12.2-47.el7rhgs.x86_64 glusterfs-cli-3.12.2-47.el7rhgs.x86_64 glusterfs-fuse-3.12.2-47.el7rhgs.x86_64 python2-gluster-3.12.2-47.el7rhgs.x86_64 glusterfs-geo-replication-3.12.2-47.el7rhgs.x86_64 gluster-block-0.2.1-31.el7rhgs.x86_64 AMQ pod " "broker-amq-1_gp-amq-cluster" cannot start properly because it cannot mount volume vol_c03b4bf4a04adb7ce011d597d1f95706 >From the pod logs: Warning Failed Mount MountVolume.SetUp failed for volume "pvc-be25d142-545f-11e9-a0f4-566f86f30040" : mount failed: mount failed: exit status 1 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/723b7932-6a7b-11e9-aed6-566f86f30041/volumes/kubernetes.io~glusterfs/pvc-be25d142-545f-11e9-a0f4-566f86f30040 --scope -- mount -t glusterfs -o log-level=ERROR,log-file=/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-be25d142-545f-11e9-a0f4-566f86f30040/broker-amq-1-glusterfs.log,backup-volfile-servers=,auto_unmount :vol_c03b4bf4a04adb7ce011d597d1f95706 /var/lib/origin/openshift.local.volumes/pods/723b7932-6a7b-11e9-aed6-566f86f30041/volumes/kubernetes.io~glusterfs/pvc-be25d142-545f-11e9-a0f4-566f86f30040 Output: Running scope as unit run-69719.scope. Mount failed. Please check the log file for more details. 
The following error information was pulled from the glusterfs log to help diagnose this issue: [2019-04-29 12:36:02.371455] E [glusterfsd-mgmt.c:2073:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_c03b4bf4a04adb7ce011d597d1f95706) [2019-04-29 12:36:34.568768] E [glusterfsd-mgmt.c:2073:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_c03b4bf4a04adb7ce011d597d1f95706) So the client cannot access the associated volume file for this volume. -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 03:06:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 03:06:07 +0000 Subject: [Bugs] [Bug 1707658] New: AMQ pod cannot mount a gluster volume - failed to fetch volume file Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707658 Bug ID: 1707658 Summary: AMQ pod cannot mount a gluster volume - failed to fetch volume file Product: GlusterFS Version: mainline OS: Linux Status: NEW Component: core Severity: medium Priority: medium Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: amukherj at redhat.com, atumball at redhat.com, bmekala at redhat.com, nchilaka at redhat.com, nravinas at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, srakonde at redhat.com, vbellur at redhat.com Depends On: 1705888 Target Milestone: --- Group: private Classification: Community Description of problem: Gluster version installed on the nodes: glusterfs-libs-3.12.2-47.el7rhgs.x86_64 glusterfs-3.12.2-47.el7rhgs.x86_64 glusterfs-client-xlators-3.12.2-47.el7rhgs.x86_64 glusterfs-server-3.12.2-47.el7rhgs.x86_64 glusterfs-api-3.12.2-47.el7rhgs.x86_64 glusterfs-cli-3.12.2-47.el7rhgs.x86_64 glusterfs-fuse-3.12.2-47.el7rhgs.x86_64 python2-gluster-3.12.2-47.el7rhgs.x86_64 glusterfs-geo-replication-3.12.2-47.el7rhgs.x86_64 gluster-block-0.2.1-31.el7rhgs.x86_64 AMQ pod " "broker-amq-1_gp-amq-cluster" cannot start properly because it cannot mount volume vol_c03b4bf4a04adb7ce011d597d1f95706 >From the pod logs: Warning Failed Mount MountVolume.SetUp failed for volume "pvc-be25d142-545f-11e9-a0f4-566f86f30040" : mount failed: mount failed: exit status 1 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/723b7932-6a7b-11e9-aed6-566f86f30041/volumes/kubernetes.io~glusterfs/pvc-be25d142-545f-11e9-a0f4-566f86f30040 --scope -- mount -t glusterfs -o log-level=ERROR,log-file=/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/pvc-be25d142-545f-11e9-a0f4-566f86f30040/broker-amq-1-glusterfs.log,backup-volfile-servers=,auto_unmount ip:vol_c03b4bf4a04adb7ce011d597d1f95706 /var/lib/origin/openshift.local.volumes/pods/723b7932-6a7b-11e9-aed6-566f86f30041/volumes/kubernetes.io~glusterfs/pvc-be25d142-545f-11e9-a0f4-566f86f30040 Output: Running scope as unit run-69719.scope. Mount failed. Please check the log file for more details. The following error information was pulled from the glusterfs log to help diagnose this issue: [2019-04-29 12:36:02.371455] E [glusterfsd-mgmt.c:2073:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_c03b4bf4a04adb7ce011d597d1f95706) [2019-04-29 12:36:34.568768] E [glusterfsd-mgmt.c:2073:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_c03b4bf4a04adb7ce011d597d1f95706) So the client cannot access the associated volume file for this volume. -- You are receiving this mail because: You are the assignee for the bug. 
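For a "failed to fetch volume file" error like the one in bug 1707656/1707658 above, a quick first check from any of the gluster server nodes is to confirm that the volume exists, is started, and can be mounted directly; the volume name below is taken from the report, while the server address and mount point are placeholders:

    gluster volume info vol_c03b4bf4a04adb7ce011d597d1f95706
    gluster volume status vol_c03b4bf4a04adb7ce011d597d1f95706
    mkdir -p /mnt/volcheck
    mount -t glusterfs -o log-level=DEBUG <gluster-node-ip>:/vol_c03b4bf4a04adb7ce011d597d1f95706 /mnt/volcheck

If the manual mount fails with the same getspec error, the problem is most likely on the server side (volume deleted, stopped, or glusterd unreachable) rather than in the pod mount plumbing.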
From bugzilla at redhat.com Wed May 8 03:06:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 03:06:43 +0000 Subject: [Bugs] [Bug 1707656] pod cannot mount a gluster volume - failed to fetch volume file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707656 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE Last Closed| |2019-05-08 03:06:43 --- Comment #1 from Sanju --- *** This bug has been marked as a duplicate of bug 1707658 *** -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 03:06:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 03:06:43 +0000 Subject: [Bugs] [Bug 1707658] AMQ pod cannot mount a gluster volume - failed to fetch volume file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707658 --- Comment #1 from Sanju --- *** Bug 1707656 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 03:28:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 03:28:18 +0000 Subject: [Bugs] [Bug 1251614] gf_defrag_fix_layout recursively fails, distracting from the root cause In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1251614 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED Resolution|WORKSFORME |--- Keywords| |Reopened --- Comment #2 from Nithya Balachandran --- Reopening this as this still exists. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 03:44:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 03:44:02 +0000 Subject: [Bugs] [Bug 1702299] Custom xattrs are not healed on newly added brick In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702299 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22520 (dht: Custom xattrs are not healed in case of add-brick) merged (#5) on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 03:47:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 03:47:53 +0000 Subject: [Bugs] [Bug 1707658] AMQ pod cannot mount a gluster volume - failed to fetch volume file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707658 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WORKSFORME Last Closed| |2019-05-08 03:47:53 -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Wed May 8 03:49:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 03:49:36 +0000 Subject: [Bugs] [Bug 1707671] New: Cronjob of feeding gluster blogs from different account into planet gluster isn't working Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707671 Bug ID: 1707671 Summary: Cronjob of feeding gluster blogs from different account into planet gluster isn't working Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: As mentioned in the title. For example https://github.com/gluster/planet-gluster/blob/master/data/feeds.yml has feed: https://atinmu.wordpress.com/feed/ configured however I don't see my latest blog https://atinmu.wordpress.com/2019/04/03/glusterd-volume-scalability-improvements-in-glusterfs-7/ in planet gluster. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 04:30:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 04:30:17 +0000 Subject: [Bugs] [Bug 1702316] Cannot upgrade 5.x volume to 6.1 because of unused 'crypt' and 'bd' xlators In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702316 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(rob.dewit at coosto. | |com) --- Comment #2 from Atin Mukherjee --- I don't think the upgrade failure or the geo-replication session issue is due to the missing xlators what you highlighted in the report. If you notice the following log snippet, the cleanup_and_exit which is a shutdown trigger of glusterd happened much later than the logs which complaint about the missing xlators and I can confirm that they are benign. 
[2019-04-23 12:38:25.514866] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object file: No such file or directory [2019-04-23 12:38:25.522473] E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/6.1/rpc-transport/socket.so: undefined symbol: xlator_api [2019-04-23 12:38:25.555952] W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/storage/bd.so: cannot open shared object file: No such file or directory The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/6.1/xlator/encryption/crypt.so: cannot open shared object file: No such file or directory" repeated 2 times between [2019-04-23 12:38:25.514866] and [2019-04-23 12:38:25.514931] The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api) missing: /usr/lib64/glusterfs/6.1/rpc-transport/socket.so: undefined symbol: xlator_api" repeated 7 times between [2019-04-23 12:38:25.522473] and [2019-04-23 12:38:25.522545] ################################# There's a gap of ~14 minutes here ################################################### [2019-04-23 12:52:00.569988] W [glusterfsd.c:1570:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7504) [0x7fb0f1310504] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xd5) [0x409f45] -->/usr/sbin/glusterd(cleanup_and_exit+0x57) [0x409db7] ) 0-: received signum (15), shutting down You'd need to provide us the brick logs along with glusterd logs, gluster volume status and gluster get-state output from the node where you see this happening. Related to geo-rep failures, I'd suggest you to file a different bug once this stabilises. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 04:32:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 04:32:42 +0000 Subject: [Bugs] [Bug 1703007] The telnet or something would cause high memory usage for glusterd & glusterfsd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703007 --- Comment #2 from Atin Mukherjee --- Ping? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 05:01:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 05:01:23 +0000 Subject: [Bugs] [Bug 1706683] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706683 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(ravishankar at redha | |t.com) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 05:18:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 05:18:44 +0000 Subject: [Bugs] [Bug 1706683] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706683 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED -- You are receiving this mail because: You are on the CC list for the bug. 
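For the data requested in bug 1702316 above (brick logs, glusterd logs, gluster volume status and gluster get-state output), a sketch of how to collect it on the affected node follows; the log locations assume a default installation:

    gluster volume status
    gluster get-state                      # writes a state file under /var/run/gluster/ by default
    cat /var/run/gluster/glusterd_state_*  # the newest file is the one just generated
    tail -n 200 /var/log/glusterfs/glusterd.log
    tail -n 200 /var/log/glusterfs/bricks/*.log

Attaching these along with the exact package versions usually makes it possible to tell whether the brick process failed to start or failed to register with glusterd.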
From bugzilla at redhat.com Wed May 8 05:34:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 05:34:05 +0000 Subject: [Bugs] [Bug 1707686] New: geo-rep: Always uses rsync even with use_tarssh set to true Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707686 Bug ID: 1707686 Summary: geo-rep: Always uses rsync even with use_tarssh set to true Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: It always uses rsync to sync data even though use_tarssh is set to true. Version-Release number of selected component (if applicable): mainilne How reproducible: Always Steps to Reproduce: 1. Setup geo-rep between two gluster volumes and start it 2. Set use_tarssh to true 3. Write a huge file on master 4. ps -ef | egrep "tar|rsync" while the big file is syncing to slave. It show rsync process instead of tar over ssh Actual results: use_tarssh has not effect on sync-engine. It's always using rsync. Expected results: use_tarssh should use tarssh and not rsync Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 05:34:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 05:34:19 +0000 Subject: [Bugs] [Bug 1707686] geo-rep: Always uses rsync even with use_tarssh set to true In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707686 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 06:45:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 06:45:33 +0000 Subject: [Bugs] [Bug 1695099] The number of glusterfs processes keeps increasing, using all available resources In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695099 Christian Ihle changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE Flags|needinfo?(christian.ihle at dr | |ift.oslo.kommune.no) | Last Closed| |2019-05-08 06:45:33 --- Comment #6 from Christian Ihle --- I have tested 5.6 and have so far been unable to reproduce the problem. Looks like the problem is fixed. *** This bug has been marked as a duplicate of bug 1696147 *** -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 06:45:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 06:45:33 +0000 Subject: [Bugs] [Bug 1696147] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696147 Christian Ihle changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |christian.ihle at drift.oslo.k | |ommune.no --- Comment #4 from Christian Ihle --- *** Bug 1695099 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are on the CC list for the bug. 
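For bug 1707686 above, a minimal way to observe the behaviour is sketched below; <mastervol>, <slavehost> and <slavevol> stand for an existing geo-replication session and <mastermnt> for a client mount of the master volume:

    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config use_tarssh true
    dd if=/dev/urandom of=<mastermnt>/bigfile bs=1M count=2048
    # on the master nodes, while the file is being synced:
    ps -ef | egrep "tar|rsync"

With use_tarssh set to true one would expect a tar/ssh pipeline in the process list; the reported bug is that an rsync process shows up instead.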
From bugzilla at redhat.com Wed May 8 06:47:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 06:47:03 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #651 from Worker Ant --- REVIEW: https://review.gluster.org/22642 (glusterd/store: store all key-values in one shot) merged (#21) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 06:48:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 06:48:14 +0000 Subject: [Bugs] [Bug 1707700] New: maintain consistent values across for options when fetched at cluster level or volume level Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707700 Bug ID: 1707700 Summary: maintain consistent values across for options when fetched at cluster level or volume level Product: GlusterFS Version: mainline Status: NEW Component: cli Severity: low Priority: low Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, nchilaka at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, srakonde at redhat.com, storage-qa-internal at redhat.com Depends On: 1706776 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1706776 [Bug 1706776] maintain consistent values across for options when fetched at cluster level or volume level -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 06:49:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 06:49:21 +0000 Subject: [Bugs] [Bug 1707700] maintain consistent values across for options when fetched at cluster level or volume level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707700 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1706776 Depends On|1706776 | Assignee|bugs at gluster.org |amukherj at redhat.com --- Comment #1 from Atin Mukherjee --- Description of problem: ===================== A few options are showing different values when they are fetched at cluster level as against volume level. 
A consistent representation would be nice to have eg: [root at dhcp42-80 ~]# gluster v get test350 all|egrep "cluster.server-quorum-ratio|cluster.enable-shared-storage|cluster.op-version|cluster.max-op-version|cluster.brick-multiplex|cluster.max-bricks-per-process|glusterd.vol_count_per_thread|cluster.daemon-log-level" cluster.server-quorum-ratio 0 cluster.enable-shared-storage disable cluster.brick-multiplex off glusterd.vol_count_per_thread 100 cluster.max-bricks-per-process 250 cluster.daemon-log-level INFO [root at dhcp42-80 ~]# gluster v get all all Option Value ------ ----- cluster.server-quorum-ratio 51 cluster.enable-shared-storage disable cluster.op-version 70000 cluster.max-op-version 70000 cluster.brick-multiplex disable cluster.max-bricks-per-process 250 glusterd.vol_count_per_thread 100 cluster.daemon-log-level INFO In above example below are the descrepencies 1)It can be seen that "cluster.server-quorum-ratio"at cluster level shows 51% as against 0 at volume level 2) and "cluster.brick-multiplex " shows disable at cluster level as against off at volume level--> though both are effectively same, it would be good to either use disable or off at both places. No functional impact, and more of a cosmetic issue Version-Release number of selected component (if applicable): ================ mainline How reproducible: ================ always Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1706776 [Bug 1706776] maintain consistent values across for options when fetched at cluster level or volume level -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 06:52:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 06:52:39 +0000 Subject: [Bugs] [Bug 1707700] maintain consistent values across for options when fetched at cluster level or volume level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707700 --- Comment #2 from Atin Mukherjee --- RCA: There are two places where global cluster wide options are defined. (1) In valid_all_vol_opts structure & (2) In VME table in glusterd-volume-set.c . In 1 & 2 the default values don't match and hence this bug. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 06:54:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 06:54:52 +0000 Subject: [Bugs] [Bug 1707700] maintain consistent values across for options when fetched at cluster level or volume level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707700 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22680 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed May 8 06:54:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 06:54:53 +0000 Subject: [Bugs] [Bug 1707700] maintain consistent values across for options when fetched at cluster level or volume level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707700 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22680 (glusterd: fix inconsistent global option output in volume get) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 07:45:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 07:45:19 +0000 Subject: [Bugs] [Bug 1702316] Cannot upgrade 5.x volume to 6.1 because of unused 'crypt' and 'bd' xlators In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702316 robdewit changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(rob.dewit at coosto. | |com) | --- Comment #3 from robdewit --- Hi, I tried upgrading one of the nodes again: 1) shutdown glusterd 5.6 2) install 6.1 3) start glusterd 6.1 4) no working brick 5) shutdown glusterd 6.1 6) downgrade to 5.6 7) start glusterd 5.6 8) brick is working fine again The volume status is showing only the other nodes as the node running 6.1 is failing the brick process: === START volume status === Status of volume: jf-vol0 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.0.25:/local.mnt/glfs/brick 49153 0 Y 20952 Brick 10.10.0.208:/local.mnt/glfs/brick 49153 0 Y 29631 Self-heal Daemon on localhost N/A N/A Y 3487 Self-heal Daemon on 10.10.0.208 N/A N/A Y 27031 Task Status of Volume jf-vol0 ------------------------------------------------------------------------------ There are no active volume tasks === END volume status === === START glusterd.log === [2019-05-08 07:23:26.043605] I [MSGID: 100030] [glusterfsd.c:2849:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 6.1 (args: /usr/sbin/glusterd --pid-file=/run/glusterd.pid) [2019-05-08 07:23:26.044499] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 21399 [2019-05-08 07:23:26.047235] I [MSGID: 106478] [glusterd.c:1422:init] 0-management: Maximum allowed open file descriptors set to 65536 [2019-05-08 07:23:26.047270] I [MSGID: 106479] [glusterd.c:1478:init] 0-management: Using /var/lib/glusterd as working directory [2019-05-08 07:23:26.047284] I [MSGID: 106479] [glusterd.c:1484:init] 0-management: Using /var/run/gluster as pid file working directory [2019-05-08 07:23:26.051068] I [socket.c:931:__socket_server_bind] 0-socket.management: process started listening on port (44950) [2019-05-08 07:23:26.051268] E [rpc-transport.c:297:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/6.1/rpc-transport/rdma.so: cannot open shared object file: No such file or directory [2019-05-08 07:23:26.051282] W [rpc-transport.c:301:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine [2019-05-08 07:23:26.051292] W [rpcsvc.c:1985:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed 
[2019-05-08 07:23:26.051302] E [MSGID: 106244] [glusterd.c:1785:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport [2019-05-08 07:23:26.053127] I [socket.c:902:__socket_server_bind] 0-socket.management: closing (AF_UNIX) reuse check socket 13 [2019-05-08 07:23:28.584285] I [MSGID: 106513] [glusterd-store.c:2394:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 50000 [2019-05-08 07:23:28.650177] I [MSGID: 106544] [glusterd.c:152:glusterd_uuid_init] 0-management: retrieved UUID: 5104ed01-f959-4a82-bbd6-17d4dd177ec2 [2019-05-08 07:23:28.656448] E [mem-pool.c:351:__gf_free] (-->/usr/lib64/glusterfs/6.1/xlator/mgmt/glusterd.so(+0x49190) [0x7fa26784e190] -->/usr/lib64/glusterfs/6.1/xlator/mgmt/glusterd.so(+0x48f72) [0x7fa26784df72] -->/usr/lib64/libglusterfs.so.0(__gf_free+0x21d) [0x7fa26d1f31dd] ) 0-: Assertion failed: mem_acct->rec[header->type].size >= header->size [2019-05-08 07:23:28.683589] I [MSGID: 106498] [glusterd-handler.c:3669:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2019-05-08 07:23:28.686748] I [MSGID: 106498] [glusterd-handler.c:3669:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2019-05-08 07:23:28.686787] W [MSGID: 106061] [glusterd-handler.c:3472:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout [2019-05-08 07:23:28.686819] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2019-05-08 07:23:28.687629] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 Final graph: +------------------------------------------------------------------------------+ 1: volume management 2: type mgmt/glusterd 3: option rpc-auth.auth-glusterfs on 4: option rpc-auth.auth-unix on 5: option rpc-auth.auth-null on 6: option rpc-auth-allow-insecure on 7: option transport.listen-backlog 1024 8: option event-threads 1 9: option ping-timeout 0 10: option transport.socket.read-fail-log off 11: option transport.socket.keepalive-interval 2 12: option transport.socket.keepalive-time 10 13: option transport-type rdma 14: option working-directory /var/lib/glusterd 15: end-volume 16: +------------------------------------------------------------------------------+ [2019-05-08 07:23:28.687625] W [MSGID: 106061] [glusterd-handler.c:3472:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout [2019-05-08 07:23:28.689771] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0 [2019-05-08 07:23:29.388437] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 88496e0c-298b-47ef-98a1-a884ca68d7d4, host: 10.10.0.208, port: 0 [2019-05-08 07:23:29.393409] I [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a fresh brick process for brick /local.mnt/glfs/brick [2019-05-08 07:23:29.395426] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2019-05-08 07:23:29.460728] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600 [2019-05-08 07:23:29.460868] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: nfs already stopped [2019-05-08 07:23:29.460911] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: nfs service is stopped [2019-05-08 07:23:29.461360] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600 [2019-05-08 07:23:29.462857] I [MSGID: 
106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: glustershd already stopped [2019-05-08 07:23:29.462902] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: glustershd service is stopped [2019-05-08 07:23:29.462959] I [MSGID: 106567] [glusterd-svc-mgmt.c:220:glusterd_svc_start] 0-management: Starting glustershd service [2019-05-08 07:23:30.465107] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600 [2019-05-08 07:23:30.465293] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: quotad already stopped [2019-05-08 07:23:30.465314] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: quotad service is stopped [2019-05-08 07:23:30.465351] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600 [2019-05-08 07:23:30.465477] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: bitd already stopped [2019-05-08 07:23:30.465489] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: bitd service is stopped [2019-05-08 07:23:30.465517] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600 [2019-05-08 07:23:30.465633] I [MSGID: 106131] [glusterd-proc-mgmt.c:86:glusterd_proc_stop] 0-management: scrub already stopped [2019-05-08 07:23:30.465645] I [MSGID: 106568] [glusterd-svc-mgmt.c:253:glusterd_svc_stop] 0-management: scrub service is stopped [2019-05-08 07:23:30.465689] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600 [2019-05-08 07:23:30.465772] I [rpc-clnt.c:1005:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600 [2019-05-08 07:23:30.466776] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 88496e0c-298b-47ef-98a1-a884ca68d7d4 [2019-05-08 07:23:30.466822] I [MSGID: 106493] [glusterd-rpc-ops.c:468:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: a6ff7d5b-1e8d-4cdc-97cf-4e03b89462a3, host: 10.10.0.25, port: 0 [2019-05-08 07:23:30.490461] I [MSGID: 106493] [glusterd-rpc-ops.c:681:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: a6ff7d5b-1e8d-4cdc-97cf-4e03b89462a3 [2019-05-08 07:23:47.540967] I [MSGID: 106584] [glusterd-handler.c:5995:__glusterd_handle_get_state] 0-management: Received request to get state for glusterd [2019-05-08 07:23:47.541003] I [MSGID: 106061] [glusterd-handler.c:5517:glusterd_get_state] 0-management: Default output directory: /var/run/gluster/ [2019-05-08 07:23:47.541052] I [MSGID: 106061] [glusterd-handler.c:5553:glusterd_get_state] 0-management: Default filename: glusterd_state_20190508_092347 === END glusterd.log === === START glustershd.log === [2019-05-08 07:23:29.465963] I [MSGID: 100030] [glusterfsd.c:2849:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 6.1 (args: /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/dc47fa45e83d2326.socket --xlator-option *replicate*.node-uuid=5104ed01-f959-4a82-bbd6-17d4dd177ec2 --process-name glustershd --client-pid=-6) [2019-05-08 07:23:29.466783] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 29165 [2019-05-08 07:23:29.469726] I [socket.c:902:__socket_server_bind] 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 10 [2019-05-08 07:23:29.471280] I [MSGID: 101190] 
[event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0 [2019-05-08 07:23:29.471317] I [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: localhost [2019-05-08 07:23:29.471326] I [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers [2019-05-08 07:23:29.471518] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-05-08 07:23:29.471540] W [glusterfsd.c:1570:cleanup_and_exit] (-->/usr/lib64/libgfrpc.so.0(+0xe7b3) [0x7f8e5adb37b3] -->/usr/sbin/glusterfs() [0x411629] -->/usr/sbin/glusterfs(cleanup_and_exit+0x57) [0x409db7] ) 0-: received signum (1), shutting down === END glustershd.log === === START local.mnt-glfs-brick.log === [2019-05-08 07:23:29.396753] I [MSGID: 100030] [glusterfsd.c:2849:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 6.1 (args: /usr/sbin/glusterfsd -s 10.10.0.177 --volfile-id jf-vol0.10.10.0.177.local.mnt-glfs-brick -p /var/run/gluster/vols/jf-vol0/10.10.0.177-local.mnt-glfs-brick.pid -S /var/run/gluster/ccdac309d72f1df7.socket --brick-name /local.mnt/glfs/brick -l /var/log/glusterfs/bricks/local.mnt-glfs-brick.log --xlator-option *-posix.glusterd-uuid=5104ed01-f959-4a82-bbd6-17d4dd177ec2 --process-name brick --brick-port 49153 --xlator-option jf-vol0-server.listen-port=49153) [2019-05-08 07:23:29.397519] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 28996 [2019-05-08 07:23:29.400575] I [socket.c:902:__socket_server_bind] 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 10 [2019-05-08 07:23:29.401901] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-05-08 07:23:29.402622] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0 [2019-05-08 07:23:29.402631] I [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: 10.10.0.177 [2019-05-08 07:23:29.402649] I [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers [2019-05-08 07:23:29.402770] W [glusterfsd.c:1570:cleanup_and_exit] (-->/usr/lib64/libgfrpc.so.0(+0xe7b3) [0x7fe46b1f77b3] -->/usr/sbin/glusterfsd() [0x411629] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x57) [0x409db7] ) 0-: received signum (1), shutting down [2019-05-08 07:23:29.403338] I [socket.c:3754:socket_submit_outgoing_msg] 0-glusterfs: not connected (priv->connected = 0) [2019-05-08 07:23:29.403353] W [rpc-clnt.c:1704:rpc_clnt_submit] 0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2 Program: Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport (glusterfs) [2019-05-08 07:23:29.403420] W [glusterfsd.c:1570:cleanup_and_exit] (-->/usr/lib64/libgfrpc.so.0(+0xe7b3) [0x7fe46b1f77b3] -->/usr/sbin/glusterfsd() [0x411629] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x57) [0x409db7] ) 0-: received signum (1), shutting down === END local.mnt-glfs-brick.log === === START glusterd_state_20190508_092347 === [Global] MYUUID: 5104ed01-f959-4a82-bbd6-17d4dd177ec2 op-version: 50000 [Global options] [Peers] Peer1.primary_hostname: 10.10.0.208 Peer1.uuid: 88496e0c-298b-47ef-98a1-a884ca68d7d4 Peer1.state: Peer in Cluster Peer1.connected: Connected Peer1.othernames: Peer2.primary_hostname: 10.10.0.25 Peer2.uuid: a6ff7d5b-1e8d-4cdc-97cf-4e03b89462a3 Peer2.state: Peer in Cluster Peer2.connected: Connected Peer2.othernames: [Volumes] Volume1.name: jf-vol0 Volume1.id: 
f90d35dd-b2a4-461b-9ae9-dcfc68dac322 Volume1.type: Replicate Volume1.transport_type: tcp Volume1.status: Started Volume1.profile_enabled: 0 Volume1.brickcount: 3 Volume1.Brick1.path: 10.10.0.177:/local.mnt/glfs/brick Volume1.Brick1.hostname: 10.10.0.177 Volume1.Brick1.port: 49153 Volume1.Brick1.rdma_port: 0 Volume1.Brick1.port_registered: 0 Volume1.Brick1.status: Stopped Volume1.Brick1.spacefree: 1891708428288Bytes Volume1.Brick1.spacetotal: 1891966050304Bytes Volume1.Brick2.path: 10.10.0.25:/local.mnt/glfs/brick Volume1.Brick2.hostname: 10.10.0.25 Volume1.Brick3.path: 10.10.0.208:/local.mnt/glfs/brick Volume1.Brick3.hostname: 10.10.0.208 Volume1.snap_count: 0 Volume1.stripe_count: 1 Volume1.replica_count: 3 Volume1.subvol_count: 1 Volume1.arbiter_count: 0 Volume1.disperse_count: 0 Volume1.redundancy_count: 0 Volume1.quorum_status: not_applicable Volume1.snapd_svc.online_status: Offline Volume1.snapd_svc.inited: True Volume1.rebalance.id: 00000000-0000-0000-0000-000000000000 Volume1.rebalance.status: not_started Volume1.rebalance.failures: 0 Volume1.rebalance.skipped: 0 Volume1.rebalance.lookedup: 0 Volume1.rebalance.files: 0 Volume1.rebalance.data: 0Bytes Volume1.time_left: 0 Volume1.gsync_count: 0 Volume1.options.cluster.readdir-optimize: on Volume1.options.cluster.self-heal-daemon: enable Volume1.options.cluster.lookup-optimize: on Volume1.options.network.inode-lru-limit: 200000 Volume1.options.performance.md-cache-timeout: 600 Volume1.options.performance.cache-invalidation: on Volume1.options.performance.stat-prefetch: on Volume1.options.features.cache-invalidation-timeout: 600 Volume1.options.features.cache-invalidation: on Volume1.options.diagnostics.brick-sys-log-level: INFO Volume1.options.diagnostics.brick-log-level: INFO Volume1.options.diagnostics.client-log-level: INFO Volume1.options.transport.address-family: inet Volume1.options.nfs.disable: on Volume1.options.performance.client-io-threads: off [Services] svc1.name: glustershd svc1.online_status: Offline svc2.name: nfs svc2.online_status: Offline svc3.name: bitd svc3.online_status: Offline svc4.name: scrub svc4.online_status: Offline svc5.name: quotad svc5.online_status: Offline [Misc] Base port: 49152 Last allocated port: 49153 === END glusterd_state_20190508_092347 === -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 07:49:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 07:49:03 +0000 Subject: [Bugs] [Bug 1707227] glusterfsd memory leak after enable tls/ssl In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707227 --- Comment #1 from zhou lin --- thanks for your respond! glusterfsd process does call SSL_free interface, however, the ssl context is a shared one between many ssl object. do you think it is possible that if we keep the shared ssl context will cause this memory leak? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
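For the memory growth discussed in bug 1707227 above, one way to see where the allocations accumulate is to take periodic statedumps of the brick process and compare the memory accounting sections; this is only a sketch, assuming the default statedump directory /var/run/gluster and placeholder file names:

    gluster volume statedump <volname>          # or: kill -USR1 <pid-of-glusterfsd>
    ls -lt /var/run/gluster/*.dump.*
    grep -E "usage-type|size=" /var/run/gluster/<first-dump-file>  > /tmp/mem-before
    # repeat after a few hours of SSL traffic:
    grep -E "usage-type|size=" /var/run/gluster/<second-dump-file> > /tmp/mem-after
    diff /tmp/mem-before /tmp/mem-after

If the growth shows up under the socket/SSL related allocation types, that would support the theory that the shared SSL context (or per-connection SSL objects) is not being released.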
From bugzilla at redhat.com Wed May 8 08:09:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 08:09:03 +0000 Subject: [Bugs] [Bug 1706893] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 Rahul Hinduja changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |rhinduja at redhat.com Blocks| |1696807 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 08:09:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 08:09:06 +0000 Subject: [Bugs] [Bug 1706893] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: Auto pm_ack+ for | |devel & qe approved BZs at | |RHGS 3.5.0 Rule Engine Rule| |665 Target Release|--- |RHGS 3.5.0 Rule Engine Rule| |666 Rule Engine Rule| |327 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 08:23:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 08:23:10 +0000 Subject: [Bugs] [Bug 1707728] New: geo-rep: Sync hangs with tarssh as sync-engine Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707728 Bug ID: 1707728 Summary: geo-rep: Sync hangs with tarssh as sync-engine Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community
Description of problem: With a heavy workload like the one below on the master, the sync hangs when tarssh is the sync engine. The same workload syncs fine with rsync as the sync engine.

for i in {1..10000}
do
    echo "sample data" > <mastermnt>/<dir>/file$i
    mv -f <mastermnt>/<dir>/file$i <mastermnt>/
done

3. Start geo-rep and wait till the status is changelog crawl
4. Configure sync-jobs to 1: gluster vol geo-rep <mastervol> <slavehost>::<slavevol> config sync-jobs 1
5. Configure sync engine to tarssh: gluster vol geo-rep <mastervol> <slavehost>::<slavevol> config sync-method tarssh
6. Stop the geo-rep
7. Do the I/O on <mastermnt> as mentioned above: for i in {1..10000}; do echo "sample data" > <mastermnt>/<dir>/file$i; mv -f <mastermnt>/<dir>/file$i <mastermnt>/; done

References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707728 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Wed May 8 08:30:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 08:30:23 +0000 Subject: [Bugs] [Bug 1707731] New: [Upgrade] Config files are not upgraded to new version Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707731 Bug ID: 1707731 Summary: [Upgrade] Config files are not upgraded to new version Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: avishwan at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Configuration handling was enhanced with patch https://review.gluster.org/#/c/glusterfs/+/18257/, Old configurations are not applied if Geo-rep session is created in the old version and upgraded. Actual results: All configurations reset when upgraded. Expected results: Configuration should be upgraded to the new format when Geo-replication is run for the first time after the upgrade. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 08:30:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 08:30:39 +0000 Subject: [Bugs] [Bug 1707731] [Upgrade] Config files are not upgraded to new version In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707731 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |avishwan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 08:46:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 08:46:17 +0000 Subject: [Bugs] [Bug 1705884] Image size as reported from the fuse mount is incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705884 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22681 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 08:46:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 08:46:18 +0000 Subject: [Bugs] [Bug 1705884] Image size as reported from the fuse mount is incorrect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705884 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22681 (features/shard: Fix block-count accounting upon truncate to lower size) posted (#1) for review on master by Krutika Dhananjay -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
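For the configuration upgrade issue in bug 1707731 above, the effect can be captured by listing the session configuration before and after the upgrade; <mastervol>, <slavehost> and <slavevol> are placeholders for an existing session:

    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config > /tmp/georep-config.before
    # upgrade the gluster packages, then:
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config > /tmp/georep-config.after
    diff /tmp/georep-config.before /tmp/georep-config.after

Non-default options that were set before the upgrade and come back as defaults afterwards are instances of the reported behaviour.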
From bugzilla at redhat.com Wed May 8 08:49:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 08:49:37 +0000 Subject: [Bugs] [Bug 1707742] New: tests/geo-rep: arequal checksum comparison always succeeds Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707742 Bug ID: 1707742 Summary: tests/geo-rep: arequal checksum comparison always succeeds Product: GlusterFS Version: mainline Status: NEW Component: tests Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community
Description of problem: The arequal checksum comparison always succeeds in all geo-rep test cases.
Version-Release number of selected component (if applicable):
How reproducible: Always
Steps to Reproduce: Run any geo-rep test case with bash -x:

bash -x tests/00-geo-rep/georep-basic-dr-tarssh.t
+ _EXPECT_WITHIN 109 120 0 arequal_checksum /mnt/glusterfs/0 /mnt/glusterfs/1
+ TESTLINE=109
..
..
+ dbg 'TEST 35 (line 109): 0 arequal_checksum /mnt/glusterfs/0 /mnt/glusterfs/1'
+ '[' x0 = x0 ']'
+ saved_cmd='0 arequal_checksum /mnt/glusterfs/0 /mnt/glusterfs/1'
+ e=0
+ a=
+ shift
++ date +%s
+ local endtime=1557300038
+ EW_RETRIES=0
++ date +%s
+ '[' 1557299918 -lt 1557300038 ']'
++ tail -1
++ arequal_checksum /mnt/glusterfs/0 /mnt/glusterfs/1
++ master=/mnt/glusterfs/0
++ slave=/mnt/glusterfs/1
++ wc -l
++ diff /dev/fd/63 /dev/fd/62
+++ arequal-checksum -p /mnt/glusterfs/1
+++ arequal-checksum -p /mnt/glusterfs/0
++ exit 0
+ a=20
+ '[' 0 -ne 0 ']'
+ [[ 20 =~ 0 ]] <<<< Even though a=20 is not equal to the expected value 0, the =~ regex match succeeds ("0" is a substring of "20"), so the loop breaks on the first iteration
+ break

Actual results: The arequal check exits the wait loop on the first call even when the checksums do not match.
Expected results: The arequal check should exit the wait loop only when the checksums actually match.
Additional info:
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 08:56:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 08:56:33 +0000 Subject: [Bugs] [Bug 1707742] tests/geo-rep: arequal checksum comparison always succeeds In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707742 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22682 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 08:56:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 08:56:34 +0000 Subject: [Bugs] [Bug 1707742] tests/geo-rep: arequal checksum comparison always succeeds In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707742 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22682 (tests/geo-rep: Fix arequal checksum comparison) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
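The premature break seen in the trace above comes from bash's =~ operator performing an unanchored regular-expression match: the expected value 0 matches anywhere inside the actual value 20, so the retry loop in the test helper is satisfied on the first pass. A standalone illustration (not the actual test-framework code):

    a=20
    e=0
    [[ "$a" =~ $e ]] && echo "loose match: pattern '$e' found inside '$a'"
    # anchoring the pattern, or using a plain numeric comparison, avoids the false positive:
    [[ "$a" =~ ^${e}$ ]] || echo "no exact regex match"
    [ "$a" -eq "$e" ]    || echo "not numerically equal"

The patch posted in review 22682 above presumably tightens this comparison so the wait loop only ends when the checksums really are identical.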
From bugzilla at redhat.com Wed May 8 08:58:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 08:58:26 +0000 Subject: [Bugs] [Bug 1707728] geo-rep: Sync hangs with tarssh as sync-engine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707728 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22684 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 08:58:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 08:58:27 +0000 Subject: [Bugs] [Bug 1707728] geo-rep: Sync hangs with tarssh as sync-engine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707728 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22684 (geo-rep: Fix sync hang with tarssh) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 09:03:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 09:03:44 +0000 Subject: [Bugs] [Bug 1707746] New: AFR-v2 does not log before attempting data self-heal Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707746 Bug ID: 1707746 Summary: AFR-v2 does not log before attempting data self-heal Product: GlusterFS Version: 4.1 Status: NEW Component: replicate Severity: low Assignee: bugs at gluster.org Reporter: ravishankar at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community afr: log before attempting data self-heal. I was working on a blog about troubleshooting AFR issues and I wanted to copy the messages logged by self-heal for my blog. I then realized that AFR-v2 is not logging *before* attempting data heal while it logs it for metadata and entry heals. I [MSGID: 108026] [afr-self-heal-entry.c:883:afr_selfheal_entry_do] 0-testvol-replicate-0: performing entry selfheal on d120c0cf-6e87-454b-965b-0d83a4c752bb I [MSGID: 108026] [afr-self-heal-common.c:1741:afr_log_selfheal] 0-testvol-replicate-0: Completed entry selfheal on d120c0cf-6e87-454b-965b-0d83a4c752bb. sources=[0] 2 sinks=1 I [MSGID: 108026] [afr-self-heal-common.c:1741:afr_log_selfheal] 0-testvol-replicate-0: Completed data selfheal on a9b5f183-21eb-4fb3-a342-287d3a7dddc5. sources=[0] 2 sinks=1 I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-testvol-replicate-0: performing metadata selfheal on a9b5f183-21eb-4fb3-a342-287d3a7dddc5 I [MSGID: 108026] [afr-self-heal-common.c:1741:afr_log_selfheal] 0-testvol-replicate-0: Completed metadata selfheal on a9b5f183-21eb-4fb3-a342-287d3a7dddc5. sources=[0] 2 sinks=1 Adding it in this patch. Now there is a 'performing' and a corresponding 'Completed' message for every type of heal. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed May 8 09:04:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 09:04:03 +0000 Subject: [Bugs] [Bug 1707746] AFR-v2 does not log before attempting data self-heal In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707746 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Status|NEW |ASSIGNED Version|4.1 |mainline Assignee|bugs at gluster.org |ravishankar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 09:07:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 09:07:42 +0000 Subject: [Bugs] [Bug 1707746] AFR-v2 does not log before attempting data self-heal In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707746 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22685 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 09:07:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 09:07:43 +0000 Subject: [Bugs] [Bug 1707746] AFR-v2 does not log before attempting data self-heal In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707746 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22685 (afr: log before attempting data self-heal.) posted (#1) for review on master by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 10:29:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 10:29:41 +0000 Subject: [Bugs] [Bug 1679401] Geo-rep setup creates an incorrectly formatted authorized_keys file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679401 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-02-25 05:21:50 |2019-05-08 10:29:41 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22673 (geo-rep: fix incorrectly formatted authorized_keys) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 10:32:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 10:32:17 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #37 from Worker Ant --- REVIEW: https://review.gluster.org/22458 (tests: enhance the auth.allow test to validate all failures of 'login' module) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed May 8 13:26:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 13:26:59 +0000 Subject: [Bugs] [Bug 1707746] AFR-v2 does not log before attempting data self-heal In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707746 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-08 13:26:59 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22685 (afr: log before attempting data self-heal.) merged (#2) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 13:31:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 13:31:33 +0000 Subject: [Bugs] [Bug 1694139] Error waiting for job 'heketi-storage-copy-job' to complete on one-node k3s deployment. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694139 Assen Sharlandjiev changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |assen.sharlandjiev at gmail.co | |m --- Comment #3 from Assen Sharlandjiev --- Hi, I ran into the same problem. Checked the syslog on the k3s host node: #tail -f /var/log/syslog shows May 8 16:26:53 k3s-node2 k3s[922]: E0508 16:26:53.466167 922 desired_state_of_world_populator.go:298] Failed to add volume "heketi-storage" (specName: "heketi-storage") for pod "9a3ec318-718e-11e9-9557-3e1cb9b46815" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "heketi-storage" err=no volume plugin matched May 8 16:26:53 k3s-node2 k3s[922]: E0508 16:26:53.569733 922 desired_state_of_world_populator.go:298] Failed to add volume "heketi-storage" (specName: "heketi-storage") for pod "9a3ec318-718e-11e9-9557-3e1cb9b46815" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "heketi-storage" err=no volume plugin matched I guess we are missing something in the k3s agent node. hope this info helps. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 13:55:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 13:55:13 +0000 Subject: [Bugs] [Bug 1702271] Memory accounting information is not always accurate In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702271 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-08 13:55:13 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22607 (core: handle memory accounting correctly) merged (#2) on release-6 by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed May 8 13:55:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 13:55:37 +0000 Subject: [Bugs] [Bug 1699917] I/O error on writes to a disperse volume when replace-brick is executed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699917 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-08 13:55:37 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22608 (cluster/ec: fix fd reopen) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 13:55:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 13:55:38 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Bug 1692394 depends on bug 1699917, which changed state. Bug 1699917 Summary: I/O error on writes to a disperse volume when replace-brick is executed https://bugzilla.redhat.com/show_bug.cgi?id=1699917 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 13:56:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 13:56:51 +0000 Subject: [Bugs] [Bug 1701818] Syntactical errors in hook scripts for managing SELinux context on bricks #2 (S10selinux-label-brick.sh + S10selinux-del-fcontext.sh) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701818 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-08 13:56:51 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22594 (extras/hooks: syntactical errors in SELinux hooks, scipt logic improved) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 13:57:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 13:57:48 +0000 Subject: [Bugs] [Bug 1702734] ctime: Logs are flooded with "posix set mdata failed, No ctime" error during open In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702734 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-08 13:57:48 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22614 (ctime: Fix log repeated logging during open) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed May 8 13:58:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 13:58:10 +0000 Subject: [Bugs] [Bug 1703759] statedump is not capturing info related to glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703759 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-08 13:58:10 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22641 (glusterd: define dumpops in the xlator_api of glusterd) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 13:58:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 13:58:11 +0000 Subject: [Bugs] [Bug 1701203] GlusterFS 6.2 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701203 Bug 1701203 depends on bug 1703759, which changed state. Bug 1703759 Summary: statedump is not capturing info related to glusterd https://bugzilla.redhat.com/show_bug.cgi?id=1703759 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 14:00:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 14:00:37 +0000 Subject: [Bugs] [Bug 1707393] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707393 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22676 (cluster/dht: refactor dht lookup functions) merged (#3) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 14:02:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 14:02:11 +0000 Subject: [Bugs] [Bug 1702316] Cannot upgrade 5.x volume to 6.1 because of unused 'crypt' and 'bd' xlators In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702316 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(rob.dewit at coosto. | |com) --- Comment #4 from Atin Mukherjee --- [2019-05-08 07:23:29.471317] I [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: localhost [2019-05-08 07:23:29.471326] I [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers The above two logs from the brick log file are the cause. It appears that brick is unable to talk to glusterd. Could you please check what's the content of glusterd.vol file in this node (please locate the file and do paste the 'cat glusterd.vol' output) ? Do you see an entry 'option transport.socket.listen-port 24007' in the glusterd.vol file? If not, could you add that, restart the node and see if that makes any difference? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
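A minimal sketch of the glusterd.vol stanza being discussed, for readers skimming the thread; the option name and port come from the comment above, the surrounding lines are illustrative only, and the reporter's full file appears in the reply below:

volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    # the line suggested above:
    option transport.socket.listen-port 24007
end-volume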
From bugzilla at redhat.com Wed May 8 14:06:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 14:06:25 +0000 Subject: [Bugs] [Bug 1695399] With parallel-readdir enabled, deleting a directory containing stale linkto files fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1695399 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-08 14:06:25 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22485 (cluster/dht: Request linkto xattrs in dht_rmdir opendir) merged (#3) on release-5 by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 8 14:07:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 14:07:44 +0000 Subject: [Bugs] [Bug 1699736] Fops hang when inodelk fails on the first fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699736 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-08 14:07:44 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22567 (cluster/afr: Remove local from owners_list on failure of lock-acquisition) merged (#2) on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 14:08:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 14:08:28 +0000 Subject: [Bugs] [Bug 1707198] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707198 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-08 14:08:28 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22669 (performance/write-behind: remove request from wip list in wb_writev_cbk) merged (#2) on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 14:08:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 14:08:29 +0000 Subject: [Bugs] [Bug 1707195] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707195 Bug 1707195 depends on bug 1707198, which changed state. Bug 1707198 Summary: VM stuck in a shutdown because of a pending fuse request https://bugzilla.redhat.com/show_bug.cgi?id=1707198 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed May 8 14:08:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 14:08:29 +0000 Subject: [Bugs] [Bug 1707200] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707200 Bug 1707200 depends on bug 1707198, which changed state. Bug 1707198 Summary: VM stuck in a shutdown because of a pending fuse request https://bugzilla.redhat.com/show_bug.cgi?id=1707198 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 14:21:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 14:21:06 +0000 Subject: [Bugs] [Bug 1699500] fix truncate lock to cover the write in tuncate clean In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699500 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22560 (ec: fix truncate lock to cover the write in tuncate clean) merged (#2) on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 15:05:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 15:05:43 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22679 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 15:05:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 15:05:45 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #652 from Worker Ant --- REVIEW: https://review.gluster.org/22679 (glusterd: improve logging in __server_getspec()) merged (#4) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 15:08:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 15:08:52 +0000 Subject: [Bugs] [Bug 1702316] Cannot upgrade 5.x volume to 6.1 because of unused 'crypt' and 'bd' xlators In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702316 robdewit changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NOTABUG Flags|needinfo?(rob.dewit at coosto. | |com) | Last Closed| |2019-05-08 15:08:52 --- Comment #5 from robdewit --- That was it! The brick now starts up OK. Thanks a lot! 
=== START glusterd.vol === volume management type mgmt/glusterd option working-directory /var/lib/glusterd option transport-type socket,rdma option transport.socket.keepalive-time 10 option transport.socket.keepalive-interval 2 option transport.socket.read-fail-log off # Adding this line made it work: option transport.socket.listen-port 24007 option ping-timeout 0 option event-threads 1 # option transport.address-family inet6 # option base-port 49152 end-volume === END glusterd.vol === -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 8 15:11:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 08 May 2019 15:11:44 +0000 Subject: [Bugs] [Bug 1707866] New: Thousands of duplicate files in glusterfs mountpoint directory listing Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707866 Bug ID: 1707866 Summary: Thousands of duplicate files in glusterfs mountpoint directory listing Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: core Severity: high Assignee: bugs at gluster.org Reporter: sergemp at mail.ru CC: bugs at gluster.org Target Milestone: --- Classification: Community I have something impossible: same filenames are listed multiple times: # ls -la /mnt/VOLNAME/ ... -rwxrwxr-x 1 root root 3486 Jan 28 2016 check_connections.pl -rwxr-xr-x 1 root root 153 Dec 7 2014 sigtest.sh -rwxr-xr-x 1 root root 153 Dec 7 2014 sigtest.sh -rwxr-xr-x 1 root root 3466 Jan 5 2015 zabbix.pm -rwxr-xr-x 1 root root 3466 Jan 5 2015 zabbix.pm There're about 38981 duplicate files like that. The volume itself is a 3 x 2-replica: # gluster volume info VOLNAME Volume Name: VOLNAME Type: Distributed-Replicate Volume ID: 41f9096f-0d5f-4ea9-b369-89294cf1be99 Status: Started Snapshot Count: 0 Number of Bricks: 3 x 2 = 6 Transport-type: tcp Bricks: Brick1: gfserver1:/srv/BRICK Brick2: gfserver2:/srv/BRICK Brick3: gfserver3:/srv/BRICK Brick4: gfserver4:/srv/BRICK Brick5: gfserver5:/srv/BRICK Brick6: gfserver6:/srv/BRICK Options Reconfigured: transport.address-family: inet nfs.disable: on cluster.self-heal-daemon: enable config.transport: tcp The "duplicated" file on individual bricks: [gfserver1]# ls -la /srv/BRICK/zabbix.pm ---------T 2 root root 0 Apr 23 2018 /srv/BRICK/zabbix.pm [gfserver2]# ls -la /srv/BRICK/zabbix.pm ---------T 2 root root 0 Apr 23 2018 /srv/BRICK/zabbix.pm [gfserver3]# ls -la /srv/BRICK/zabbix.pm -rwxr-xr-x 2 root root 3466 Jan 5 2015 /srv/BRICK/zabbix.pm [gfserver4]# ls -la /srv/BRICK/zabbix.pm -rwxr-xr-x 2 root root 3466 Jan 5 2015 /srv/BRICK/zabbix.pm [gfserver5]# ls -la /srv/BRICK/zabbix.pm -rwxr-xr-x 2 root root 3466 Jan 5 2015 /srv/BRICK/zabbix.pm [gfserver6]# ls -la /srv/BRICK/zabbix.pm -rwxr-xr-x. 2 root root 3466 Jan 5 2015 /srv/BRICK/zabbix.pm Attributes: [gfserver1]# getfattr -m . -d -e hex /srv/BRICK/zabbix.pm # file: srv/BRICK/zabbix.pm trusted.afr.VOLNAME-client-1=0x000000000000000000000000 trusted.afr.VOLNAME-client-4=0x000000000000000000000000 trusted.gfid=0x422a7ccf018242b58e162a65266326c3 trusted.glusterfs.dht.linkto=0x6678666565642d7265706c69636174652d3100 [gfserver2]# getfattr -m . -d -e hex /srv/BRICK/zabbix.pm # file: srv/BRICK/zabbix.pm trusted.gfid=0x422a7ccf018242b58e162a65266326c3 trusted.gfid2path.3b27d24cad4dceef=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f7a61626269782e706d trusted.glusterfs.dht.linkto=0x6678666565642d7265706c69636174652d3100 [gfserver3]# getfattr -m . 
-d -e hex /srv/BRICK/zabbix.pm # file: srv/BRICK/zabbix.pm trusted.afr.VOLNAME-client-2=0x000000000000000000000000 trusted.afr.VOLNAME-client-3=0x000000000000000000000000 trusted.gfid=0x422a7ccf018242b58e162a65266326c3 [gfserver4]# getfattr -m . -d -e hex /srv/BRICK/zabbix.pm # file: srv/BRICK/zabbix.pm trusted.gfid=0x422a7ccf018242b58e162a65266326c3 trusted.gfid2path.3b27d24cad4dceef=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f7a61626269782e706d [gfserver5]# getfattr -m . -d -e hex /srv/BRICK/zabbix.pm # file: srv/BRICK/zabbix.pm trusted.bit-rot.version=0x03000000000000005c4f813c000bc71b trusted.gfid=0x422a7ccf018242b58e162a65266326c3 [gfserver6]# getfattr -m . -d -e hex /srv/BRICK/zabbix.pm # file: srv/BRICK/zabbix.pm security.selinux=0x73797374656d5f753a6f626a6563745f723a7661725f743a733000 trusted.bit-rot.version=0x02000000000000005add0ffc000eb66a trusted.gfid=0x422a7ccf018242b58e162a65266326c3 Not sure why exactly it happened... Maybe because some nodes were suddenly upgraded from centos6's gluster ~3.7 to centos7's 4.1, and some files happened to be on nodes that they're not supposed to be on. Currently all the nodes are online: # gluster pool list UUID Hostname State aac9e1a5-018f-4d27-9d77-804f0f1b2f13 gfserver5 Connected 98b22070-b579-4a91-86e3-482cfcc9c8cf gfserver3 Connected 7a9841a1-c63c-49f2-8d6d-a90ae2ff4e04 gfserver4 Connected 955f5551-8b42-476c-9eaa-feab35b71041 gfserver6 Connected 7343d655-3527-4bcf-9d13-55386ccb5f9c gfserver1 Connected f9c79a56-830d-4056-b437-a669a1942626 gfserver2 Connected 45a72ab3-b91e-4076-9cf2-687669647217 localhost Connected and have glusterfs-3.12.14-1.el6.x86_64 (Centos 6) and glusterfs-4.1.7-1.el7.x86_64 (Centos 7) installed. Expected result --------------- This looks like a layout issue, so: gluster volume rebalance VOLNAME fix-layout start should fix it, right? Actual result ------------- I tried: gluster volume rebalance VOLNAME fix-layout start gluster volume rebalance VOLNAME start gluster volume rebalance VOLNAME start force gluster volume heal VOLNAME full Those took 5 to 40 minutes to complete, but the duplicates are still there. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 02:59:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 02:59:42 +0000 Subject: [Bugs] [Bug 1707227] glusterfsd memory leak after enable tls/ssl In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707227 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22687 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 02:59:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 02:59:43 +0000 Subject: [Bugs] [Bug 1707227] glusterfsd memory leak after enable tls/ssl In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707227 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22687 (After enabling TLS, glusterfsd memory leak found) posted (#1) for review on master by None -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 03:34:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 03:34:17 +0000 Subject: [Bugs] [Bug 1706893] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 04:13:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 04:13:53 +0000 Subject: [Bugs] [Bug 1708047] New: glusterfsd memory leak after enable tls/ssl Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708047 Bug ID: 1708047 Summary: glusterfsd memory leak after enable tls/ssl Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: rpc Severity: high Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: bugs at gluster.org, zz.sh.cynthia at gmail.com Depends On: 1707227 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1707227 +++ Description of problem: glusterfsd memory leak found Version-Release number of selected component (if applicable): 3.12.15 How reproducible: while true;do gluster v heal info;done and open another session to check the memory usage of the related glusterfsd process, the memory will keep increasing until around 370M then increase will stop Steps to Reproduce: 1.while true;do gluster v heal info;done 2.check the memory usage of the related glusterfsd process 3. Actual results: the memory will keep increasing until around 370M then increase will stop Expected results: memory stable Additional info: with memory scan tool vlagrand attached to glusterfsd process and libleak attached to glusterfsd process seems ssl_accept is suspicious, not sure it is caused by ssl_accept or glusterfs mis-use of ssl: ==16673== 198,720 bytes in 12 blocks are definitely lost in loss record 1,114 of 1,123 ==16673== at 0x4C2EB7B: malloc (vg_replace_malloc.c:299) ==16673== by 0x63E1977: CRYPTO_malloc (in /usr/lib64/libcrypto.so.1.0.2p) ==16673== by 0xA855E0C: ssl3_setup_write_buffer (in /usr/lib64/libssl.so.1.0.2p) ==16673== by 0xA855E77: ssl3_setup_buffers (in /usr/lib64/libssl.so.1.0.2p) ==16673== by 0xA8485D9: ssl3_accept (in /usr/lib64/libssl.so.1.0.2p) ==16673== by 0xA610DDF: ssl_complete_connection (socket.c:400) ==16673== by 0xA617F38: ssl_handle_server_connection_attempt (socket.c:2409) ==16673== by 0xA618420: socket_complete_connection (socket.c:2554) ==16673== by 0xA618788: socket_event_handler (socket.c:2613) ==16673== by 0x4ED6983: event_dispatch_epoll_handler (event-epoll.c:587) ==16673== by 0x4ED6C5A: event_dispatch_epoll_worker (event-epoll.c:663) ==16673== by 0x615C5D9: start_thread (in /usr/lib64/libpthread-2.27.so) ==16673== ==16673== 200,544 bytes in 12 blocks are definitely lost in loss record 1,115 of 1,123 ==16673== at 0x4C2EB7B: malloc (vg_replace_malloc.c:299) ==16673== by 0x63E1977: CRYPTO_malloc (in /usr/lib64/libcrypto.so.1.0.2p) ==16673== by 0xA855D12: ssl3_setup_read_buffer (in /usr/lib64/libssl.so.1.0.2p) ==16673== by 0xA855E68: ssl3_setup_buffers (in /usr/lib64/libssl.so.1.0.2p) ==16673== by 0xA8485D9: ssl3_accept (in /usr/lib64/libssl.so.1.0.2p) ==16673== by 0xA610DDF: ssl_complete_connection (socket.c:400) ==16673== by 0xA617F38: 
ssl_handle_server_connection_attempt (socket.c:2409) ==16673== by 0xA618420: socket_complete_connection (socket.c:2554) ==16673== by 0xA618788: socket_event_handler (socket.c:2613) ==16673== by 0x4ED6983: event_dispatch_epoll_handler (event-epoll.c:587) ==16673== by 0x4ED6C5A: event_dispatch_epoll_worker (event-epoll.c:663) ==16673== by 0x615C5D9: start_thread (in /usr/lib64/libpthread-2.27.so) ==16673== valgrind --leak-check=f also, with another memory leak scan tool libleak: callstack[2419] expires. count=1 size=224/224 alloc=362 free=350 /home/robot/libleak/libleak.so(malloc+0x25) [0x7f1460604065] /lib64/libcrypto.so.10(CRYPTO_malloc+0x58) [0x7f145ecd9978] /lib64/libcrypto.so.10(EVP_DigestInit_ex+0x2a9) [0x7f145ed95749] /lib64/libssl.so.10(ssl3_digest_cached_records+0x11d) [0x7f145abb6ced] /lib64/libssl.so.10(ssl3_accept+0xc8f) [0x7f145abadc4f] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(ssl_complete_connection+0x5e) [0x7f145ae00f3a] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(+0xc16d) [0x7f145ae0816d] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(+0xc68a) [0x7f145ae0868a] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(+0xc9f2) [0x7f145ae089f2] /lib64/libglusterfs.so.0(+0x9b96f) [0x7f146038596f] /lib64/libglusterfs.so.0(+0x9bc46) [0x7f1460385c46] /lib64/libpthread.so.0(+0x75da) [0x7f145f0d15da] /lib64/libc.so.6(clone+0x3f) [0x7f145e9a7eaf] callstack[2432] expires. count=1 size=104/104 alloc=362 free=0 /home/robot/libleak/libleak.so(malloc+0x25) [0x7f1460604065] /lib64/libcrypto.so.10(CRYPTO_malloc+0x58) [0x7f145ecd9978] /lib64/libcrypto.so.10(BN_MONT_CTX_new+0x17) [0x7f145ed48627] /lib64/libcrypto.so.10(BN_MONT_CTX_set_locked+0x6d) [0x7f145ed489fd] /lib64/libcrypto.so.10(+0xff4d9) [0x7f145ed6a4d9] /lib64/libcrypto.so.10(int_rsa_verify+0x1cd) [0x7f145ed6d41d] /lib64/libcrypto.so.10(RSA_verify+0x32) [0x7f145ed6d972] /lib64/libcrypto.so.10(+0x107ff5) [0x7f145ed72ff5] /lib64/libcrypto.so.10(EVP_VerifyFinal+0x211) [0x7f145ed9dd51] /lib64/libssl.so.10(ssl3_get_cert_verify+0x5bb) [0x7f145abac06b] /lib64/libssl.so.10(ssl3_accept+0x988) [0x7f145abad948] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(ssl_complete_connection+0x5e) [0x7f145ae00f3a] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(+0xc16d) [0x7f145ae0816d] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(+0xc68a) [0x7f145ae0868a] /usr/lib64/glusterfs/3.12.15/rpc-transport/socket.so(+0xc9f2) [0x7f145ae089f2] /lib64/libglusterfs.so.0(+0x9b96f) [0x7f146038596f] /lib64/libglusterfs.so.0(+0x9bc46) [0x7f1460385c46] /lib64/libpthread.so.0(+0x75da) [0x7f145f0d15da] /lib64/libc.so.6(clone+0x3f) [0x7f145e9a7eaf] --- Additional comment from zhou lin on 2019-05-08 07:49:03 UTC --- thanks for your respond! glusterfsd process does call SSL_free interface, however, the ssl context is a shared one between many ssl object. do you think it is possible that if we keep the shared ssl context will cause this memory leak? --- Additional comment from Worker Ant on 2019-05-09 02:59:43 UTC --- REVIEW: https://review.gluster.org/22687 (After enabling TLS, glusterfsd memory leak found) posted (#1) for review on master by None Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1707227 [Bug 1707227] glusterfsd memory leak after enable tls/ssl -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
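A quick, hedged way to observe the growth described in this report: the heal-info loop is the reporter's own reproducer, while the watch/ps sampling command and the <volname> placeholder are assumptions added only for illustration.

# terminal 1: reproducer from the report (volume name is a placeholder)
while true; do gluster volume heal <volname> info; done

# terminal 2: sample resident memory (RSS) of the brick processes every 5 seconds
watch -n 5 'ps -C glusterfsd -o pid,rss,cmd'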
From bugzilla at redhat.com Thu May 9 04:13:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 04:13:53 +0000 Subject: [Bugs] [Bug 1707227] glusterfsd memory leak after enable tls/ssl In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707227 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1708047 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1708047 [Bug 1708047] glusterfsd memory leak after enable tls/ssl -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 04:18:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 04:18:50 +0000 Subject: [Bugs] [Bug 1707227] glusterfsd memory leak after enable tls/ssl In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707227 --- Comment #3 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22687 (rpc/socket: After enabling TLS, glusterfsd memory leak found) posted (#2) for review on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 04:18:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 04:18:51 +0000 Subject: [Bugs] [Bug 1707227] glusterfsd memory leak after enable tls/ssl In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707227 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22687 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 04:18:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 04:18:53 +0000 Subject: [Bugs] [Bug 1708047] glusterfsd memory leak after enable tls/ssl In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708047 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22687 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 04:18:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 04:18:54 +0000 Subject: [Bugs] [Bug 1708047] glusterfsd memory leak after enable tls/ssl In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708047 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22687 (rpc/socket: After enabling TLS, glusterfsd memory leak found) posted (#2) for review on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu May 9 04:39:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 04:39:35 +0000 Subject: [Bugs] [Bug 1706683] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706683 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(asriram at redhat.co |needinfo?(amukherj at redhat.c |m) |om) |needinfo?(ravishankar at redha | |t.com) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 04:56:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 04:56:13 +0000 Subject: [Bugs] [Bug 1708051] New: Capture memory consumption for gluster process at the time of throwing no memory available message Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708051 Bug ID: 1708051 Summary: Capture memory consumption for gluster process at the time of throwing no memory available message Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Capture current memory usage of gluster process at the time of throwing no memory available message Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 04:56:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 04:56:27 +0000 Subject: [Bugs] [Bug 1708051] Capture memory consumption for gluster process at the time of throwing no memory available message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708051 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 04:59:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 04:59:52 +0000 Subject: [Bugs] [Bug 1708051] Capture memory consumption for gluster process at the time of throwing no memory available message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708051 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22688 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu May 9 04:59:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 04:59:53 +0000 Subject: [Bugs] [Bug 1708051] Capture memory consumption for gluster process at the time of throwing no memory available message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708051 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22688 (core: Capture process memory usage at the time of call gf_msg_nomem) posted (#2) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 05:17:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:17:29 +0000 Subject: [Bugs] [Bug 1707742] tests/geo-rep: arequal checksum comparison always succeeds In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707742 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-09 05:17:29 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22682 (tests/geo-rep: Fix arequal checksum comparison) merged (#2) on master by Sunny Kumar -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 05:18:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:18:47 +0000 Subject: [Bugs] [Bug 1707686] geo-rep: Always uses rsync even with use_tarssh set to true In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707686 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22683 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 05:18:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:18:48 +0000 Subject: [Bugs] [Bug 1707686] geo-rep: Always uses rsync even with use_tarssh set to true In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707686 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-09 05:18:48 --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22683 (geo-rep: Fix sync-method config) merged (#4) on master by Sunny Kumar -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu May 9 05:29:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:29:49 +0000 Subject: [Bugs] [Bug 1708058] New: io-threads xlator doesn't scale threads in some situations Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708058 Bug ID: 1708058 Summary: io-threads xlator doesn't scale threads in some situations Product: GlusterFS Version: mainline Status: NEW Component: io-threads Assignee: bugs at gluster.org Reporter: pkarampu at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: I was working on a .t where two requests are wound one after the other, the first fop that is wound takes more than a second(because of delay-gen) and the next doesn't. This was leading to both the fops taking delay-duration seconds instead of second fop unwinding earlier than the first one. When I debugged it, I found that io-threads is not matching threads to number of fops even when it can. The .t that I am working on for another bz will be updated at https://review.gluster.org/c/glusterfs/+/22674 Version-Release number of selected component (if applicable): How reproducible: Always. Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 05:31:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:31:56 +0000 Subject: [Bugs] [Bug 1708058] io-threads xlator doesn't scale threads in some situations In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708058 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22686 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 05:31:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:31:57 +0000 Subject: [Bugs] [Bug 1708058] io-threads xlator doesn't scale threads in some situations In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708058 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22686 (performance/io-threads: Scale threads to match number of fops) posted (#2) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 05:32:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:32:36 +0000 Subject: [Bugs] [Bug 1706683] Enable enable fips-mode-rchecksum for new volumes by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706683 Sweta Anandpara changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sanandpa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu May 9 05:44:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:44:32 +0000 Subject: [Bugs] [Bug 1708064] New: [Upgrade] Config files are not upgraded to new version Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708064 Bug ID: 1708064 Summary: [Upgrade] Config files are not upgraded to new version Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: geo-replication Assignee: sunkumar at redhat.com Reporter: khiremat at redhat.com QA Contact: rhinduja at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1707731 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1707731 +++ Description of problem: Configuration handling was enhanced with patch https://review.gluster.org/#/c/glusterfs/+/18257/, Old configurations are not applied if Geo-rep session is created in the old version and upgraded. Actual results: All configurations reset when upgraded. Expected results: Configuration should be upgraded to the new format when Geo-replication is run for the first time after the upgrade. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1707731 [Bug 1707731] [Upgrade] Config files are not upgraded to new version -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 05:44:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:44:32 +0000 Subject: [Bugs] [Bug 1707731] [Upgrade] Config files are not upgraded to new version In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707731 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1708064 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1708064 [Bug 1708064] [Upgrade] Config files are not upgraded to new version -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 05:44:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:44:36 +0000 Subject: [Bugs] [Bug 1708064] [Upgrade] Config files are not upgraded to new version In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708064 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 05:45:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:45:03 +0000 Subject: [Bugs] [Bug 1708064] [Upgrade] Config files are not upgraded to new version In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708064 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|sunkumar at redhat.com |avishwan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu May 9 05:47:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:47:31 +0000 Subject: [Bugs] [Bug 1708067] New: geo-rep: Always uses rsync even with use_tarssh set to true Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708067 Bug ID: 1708067 Summary: geo-rep: Always uses rsync even with use_tarssh set to true Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: geo-replication Assignee: sunkumar at redhat.com Reporter: khiremat at redhat.com QA Contact: rhinduja at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1707686 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1707686 +++ Description of problem: It always uses rsync to sync data even though use_tarssh is set to true. Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. Set up geo-rep between two gluster volumes and start it 2. Set use_tarssh to true 3. Write a huge file on master 4. ps -ef | egrep "tar|rsync" while the big file is syncing to the slave. It shows an rsync process instead of tar over ssh Actual results: use_tarssh has no effect on the sync engine. It's always using rsync. Expected results: use_tarssh should use tarssh and not rsync Additional info: --- Additional comment from Worker Ant on 2019-05-09 05:18:48 UTC --- REVIEW: https://review.gluster.org/22683 (geo-rep: Fix sync-method config) merged (#4) on master by Sunny Kumar Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1707686 [Bug 1707686] geo-rep: Always uses rsync even with use_tarssh set to true -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 05:47:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:47:31 +0000 Subject: [Bugs] [Bug 1707686] geo-rep: Always uses rsync even with use_tarssh set to true In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707686 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1708067 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1708067 [Bug 1708067] geo-rep: Always uses rsync even with use_tarssh set to true -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 05:47:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:47:32 +0000 Subject: [Bugs] [Bug 1708067] geo-rep: Always uses rsync even with use_tarssh set to true In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708067 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug.
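To make steps 2 and 4 above concrete, a hedged sketch with placeholder volume and host names; sync-method is the config key referenced by the fix linked above, and use_tarssh is the older boolean it replaces:

# switch the sync engine for an existing geo-rep session
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config sync-method tarssh

# while a large file is syncing, the worker should now spawn tar+ssh instead of rsync
ps -ef | egrep "tar|rsync"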
From bugzilla at redhat.com Thu May 9 05:48:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:48:01 +0000 Subject: [Bugs] [Bug 1708067] geo-rep: Always uses rsync even with use_tarssh set to true In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708067 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|sunkumar at redhat.com |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 05:48:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 05:48:21 +0000 Subject: [Bugs] [Bug 1708067] geo-rep: Always uses rsync even with use_tarssh set to true In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708067 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 06:18:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 06:18:24 +0000 Subject: [Bugs] [Bug 1668989] Unable to delete directories that contain linkto files that point to itself. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1668989 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1678183, 1696806 CC| |olim at redhat.com --- Comment #11 from Nithya Balachandran --- *** Bug 1667556 has been marked as a duplicate of this bug. *** Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1678183 [Bug 1678183] Tracker BZ : rm -rf issues -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 07:02:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 07:02:29 +0000 Subject: [Bugs] [Bug 1652887] Geo-rep help looks to have a typo. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1652887 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22689 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 07:02:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 07:02:30 +0000 Subject: [Bugs] [Bug 1652887] Geo-rep help looks to have a typo. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1652887 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|CURRENTRELEASE |--- Keywords| |Reopened --- Comment #6 from Worker Ant --- REVIEW: https://review.gluster.org/22689 (geo-rep: Geo-rep help text issue) posted (#1) for review on master by Shwetha K Acharya -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 07:24:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 07:24:25 +0000 Subject: [Bugs] [Bug 1652887] Geo-rep help looks to have a typo. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1652887 Shwetha K Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |ASSIGNED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 07:39:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 07:39:25 +0000 Subject: [Bugs] [Bug 1708116] New: geo-rep: Sync hangs with tarssh as sync-engine Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708116 Bug ID: 1708116 Summary: geo-rep: Sync hangs with tarssh as sync-engine Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: geo-replication Assignee: sunkumar at redhat.com Reporter: khiremat at redhat.com QA Contact: rhinduja at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1707728 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1707728 +++ Description of problem: When the heavy workload as below on master, the sync is hung with sync engine tarssh. It's working fine with rsync as sync engine. for i in {1..10000} do echo "sample data" > //file$i mv -f //file$i / 3. Start geo-rep and wait till the status is changelog crawl 4. Configure sync-jobs to 1 gluster vol geo-rep :: config sync-jobs 1 5. Configure sync engine to tarssh gluster vol geo-rep :: config sync-method tarssh 6. Stop the geo-rep 7. Do the I/O on mastermnt as mentioned for i in {1..10000} do echo "sample data" > //file$i mv -f //file$i / References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707728 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1708116 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1708116 [Bug 1708116] geo-rep: Sync hangs with tarssh as sync-engine -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 07:39:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 07:39:26 +0000 Subject: [Bugs] [Bug 1708116] geo-rep: Sync hangs with tarssh as sync-engine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708116 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 07:39:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 07:39:59 +0000 Subject: [Bugs] [Bug 1708116] geo-rep: Sync hangs with tarssh as sync-engine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708116 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|sunkumar at redhat.com |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu May 9 07:40:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 07:40:34 +0000 Subject: [Bugs] [Bug 1708116] geo-rep: Sync hangs with tarssh as sync-engine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708116 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 07:55:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 07:55:53 +0000 Subject: [Bugs] [Bug 1694820] Geo-rep: Data inconsistency while syncing heavy renames with constant destination name In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694820 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |khiremat at redhat.com Summary|Issue in heavy rename |Geo-rep: Data inconsistency |workload |while syncing heavy renames | |with constant destination | |name -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 07:59:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 07:59:23 +0000 Subject: [Bugs] [Bug 1708121] New: Geo-rep: Data inconsistency while syncing heavy renames with constant destination name Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708121 Bug ID: 1708121 Summary: Geo-rep: Data inconsistency while syncing heavy renames with constant destination name Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: geo-replication Assignee: sunkumar at redhat.com Reporter: khiremat at redhat.com QA Contact: rhinduja at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, sunkumar at redhat.com Depends On: 1694820 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1694820 +++ Description of problem: This problem only exists in heavy RENAME workloads where parallel renames are frequent or a RENAME is done with an existing destination. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Run frequent RENAMEs on the master mount and check for sync on the slave. Ex - while true; do uuid="`uuidgen`"; echo "some data" > "test$uuid"; mv "test$uuid" "test" -f; done Actual results: Does not sync renames properly and creates multiple files on the slave. Expected results: Should sync renames.
--- Additional comment from Worker Ant on 2019-04-01 19:10:45 UTC --- REVIEW: https://review.gluster.org/22474 (geo-rep: fix rename with existing gfid) posted (#1) for review on master by Sunny Kumar --- Additional comment from Worker Ant on 2019-04-07 05:23:33 UTC --- REVIEW: https://review.gluster.org/22519 (geo-rep: Fix rename with existing destination with same gfid) posted (#1) for review on master by Kotresh HR --- Additional comment from Worker Ant on 2019-04-26 07:15:41 UTC --- REVIEW: https://review.gluster.org/22519 (geo-rep: Fix rename with existing destination with same gfid) merged (#7) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1694820 [Bug 1694820] Geo-rep: Data inconsistency while syncing heavy renames with constant destination name -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 07:59:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 07:59:23 +0000 Subject: [Bugs] [Bug 1694820] Geo-rep: Data inconsistency while syncing heavy renames with constant destination name In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694820 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1708121 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1708121 [Bug 1708121] Geo-rep: Data inconsistency while syncing heavy renames with constant destination name -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 07:59:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 07:59:28 +0000 Subject: [Bugs] [Bug 1708121] Geo-rep: Data inconsistency while syncing heavy renames with constant destination name In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708121 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 07:59:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 07:59:53 +0000 Subject: [Bugs] [Bug 1708121] Geo-rep: Data inconsistency while syncing heavy renames with constant destination name In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708121 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|sunkumar at redhat.com |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 08:00:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 08:00:22 +0000 Subject: [Bugs] [Bug 1708121] Geo-rep: Data inconsistency while syncing heavy renames with constant destination name In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708121 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu May 9 09:04:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:04:57 +0000 Subject: [Bugs] [Bug 1708156] New: ec ignores lock contention notifications for partially acquired locks Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708156 Bug ID: 1708156 Summary: ec ignores lock contention notifications for partially acquired locks Product: GlusterFS Version: mainline Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: jahernan at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: When an inodelk is being acquired, it could happen that some bricks have already granted the lock while others don't. From the point of view of ec, the lock is not yet acquired. If at this point one of the bricks that has already granted the lock receives another inodelk request, it will send a contention notification to ec. Currently ec ignores those notifications until the lock is fully acquired. This means that once ec acquires the lock on all bricks, it won't be released immediately when eager-lock is used. Version-Release number of selected component (if applicable): mainline How reproducible: Very frequently when there are multiple concurrent operations on the same directory Steps to Reproduce: 1. Create a disperse volume 2. Mount it from several clients 3. Create a few files in a directory 4. Do 'ls' of that directory at the same time from all clients Actual results: Some 'ls' take several seconds to complete Expected results: All 'ls' should complete in less than a second Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 09:05:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:05:50 +0000 Subject: [Bugs] [Bug 1708156] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708156 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |jahernan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 09:19:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:19:59 +0000 Subject: [Bugs] [Bug 1708163] New: tests: fix bug-1319374.c compile warnings. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708163 Bug ID: 1708163 Summary: tests: fix bug-1319374.c compile warnings. Product: GlusterFS Version: mainline Status: NEW Component: tests Assignee: bugs at gluster.org Reporter: ravishankar at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: tests: fix bug-1319374.c compile warnings. I was looking at a downstream failure of bug-1319374-THIS-crash.t when I saw the compiler was throwing a warning while running the test: tests/bugs/gfapi/bug-1319374.c:17:61: warning: implicit declaration of function 'strerror'; did you mean 'perror'? [-Wimplicit-function-declaration] fprintf(stderr, "\nglfs_new: returned NULL (%s)\n", strerror(errno)); ^~~~~~~~ perror So I compiled the .c with -Wall and saw many more warnings, all due to a missing header.
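A sketch of surfacing the same warnings outside the test harness; the pkg-config module name, paths, and the exact compiler invocation are assumptions for illustration, not taken from this report.
```
# Compile only (-c), so no link flags are needed; -Wall shows the full set of warnings
cd glusterfs                                   # source tree checkout
gcc -Wall -c tests/bugs/gfapi/bug-1319374.c \
    $(pkg-config --cflags glusterfs-api) -o /tmp/bug-1319374.o
# The implicit-declaration warning for strerror() points at a missing
# #include <string.h> in the .c file, which is the likely one-line fix
```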
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 09:21:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:21:14 +0000 Subject: [Bugs] [Bug 1708163] tests: fix bug-1319374.c compile warnings. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708163 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Status|NEW |ASSIGNED Assignee|bugs at gluster.org |ravishankar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 09:21:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:21:14 +0000 Subject: [Bugs] [Bug 1708156] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708156 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22690 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 09:21:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:21:15 +0000 Subject: [Bugs] [Bug 1708156] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708156 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22690 (cluster/ec: honor contention notifications for partially acquired locks) posted (#1) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 09:21:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:21:27 +0000 Subject: [Bugs] [Bug 1690753] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690753 Vishal Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |vpandey at redhat.com Assignee|risjain at redhat.com |vpandey at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 09:23:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:23:08 +0000 Subject: [Bugs] [Bug 1708163] tests: fix bug-1319374.c compile warnings. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708163 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22691 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 09:23:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:23:09 +0000 Subject: [Bugs] [Bug 1708163] tests: fix bug-1319374.c compile warnings. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708163 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22691 (tests: fix bug-1319374.c compile warnings.) posted (#1) for review on master by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 09:24:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:24:17 +0000 Subject: [Bugs] [Bug 1690753] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690753 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22692 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 09:24:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:24:18 +0000 Subject: [Bugs] [Bug 1690753] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690753 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22692 (glusterd: Add gluster volume stop operation to glusterd_validate_quorum()) posted (#1) for review on master by Vishal Pandey -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 09:25:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:25:11 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #653 from Worker Ant --- REVIEW: https://review.gluster.org/22623 (tests: improve and fix some test scripts) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 09:51:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:51:55 +0000 Subject: [Bugs] [Bug 1265308] Distribution cannot be prioritized when removing a brick In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1265308 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-09 09:51:55 --- Comment #5 from Amar Tumballi --- We wouldn't be working on this feature in the near term, and hence will mark it as DEFERRED. We will revisit this after couple of more releases. -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu May 9 09:54:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:54:12 +0000 Subject: [Bugs] [Bug 1276638] write serialization should be guaranteed for posix mandatory locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1276638 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DEFERRED Last Closed| |2019-05-09 09:54:12 --- Comment #1 from Amar Tumballi --- Thanks for the report, and considering the rarity of the bug, and as we had not seen this in last 3+ years, marking it as DEFERRED. We will revisit and pick this if we get time resource/time later. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 09:55:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:55:35 +0000 Subject: [Bugs] [Bug 1277054] peer probe using IPv6 not working In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1277054 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WORKSFORME Last Closed| |2019-05-09 09:55:35 --- Comment #1 from Amar Tumballi --- Please try version 6.x and we have fixed some of the ipv6 issues in cli/glusterd and in other places. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 09:57:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 09:57:16 +0000 Subject: [Bugs] [Bug 1287099] Race between mandatory lock request and ongoing read/write In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1287099 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-09 09:57:16 --- Comment #1 from Amar Tumballi --- Considering we had no reported bug about this in last 3 years, would prefer to mark it DEFERRED, and revisit this depending on time/resource after couple of releases. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 10:00:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:00:20 +0000 Subject: [Bugs] [Bug 1288227] samba gluster vfs - client can't follow symlinks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1288227 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com, | |vdas at redhat.com Resolution|--- |WORKSFORME Last Closed| |2019-05-09 10:00:20 --- Comment #2 from Amar Tumballi --- Please try with glusterfs-6.x and reopen this if it is still happening. We have not seen this bug in a while now. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu May 9 10:01:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:01:34 +0000 Subject: [Bugs] [Bug 1289442] high memory usage on client node In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1289442 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-09 10:01:34 --- Comment #7 from Amar Tumballi --- With fixing https://bugzilla.redhat.com/show_bug.cgi?id=1560969 we find that these issues are now fixed. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 10:01:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:01:35 +0000 Subject: [Bugs] [Bug 1371544] high memory usage on client node In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1371544 Bug 1371544 depends on bug 1289442, which changed state. Bug 1289442 Summary: high memory usage on client node https://bugzilla.redhat.com/show_bug.cgi?id=1289442 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WORKSFORME -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 10:01:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:01:35 +0000 Subject: [Bugs] [Bug 1371547] high memory usage on client node In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1371547 Bug 1371547 depends on bug 1289442, which changed state. Bug 1289442 Summary: high memory usage on client node https://bugzilla.redhat.com/show_bug.cgi?id=1289442 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WORKSFORME -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 10:01:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:01:53 +0000 Subject: [Bugs] [Bug 1288227] samba gluster vfs - client can't follow symlinks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1288227 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.x -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 10:03:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:03:20 +0000 Subject: [Bugs] [Bug 1294547] The --xml flag should always output xml In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1294547 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed|2017-03-08 10:48:49 |2019-05-09 10:03:20 --- Comment #3 from Amar Tumballi --- Not much focus on --xml output at present. Will keep it as DEFERRED, and revisit it based on time/resource. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu May 9 10:04:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:04:37 +0000 Subject: [Bugs] [Bug 1301805] Contending exclusive NFS file locks from two hosts breaks locking when blocked host gives up early. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1301805 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|medium |low CC| |atumball at redhat.com, | |jthottan at redhat.com, | |spalai at redhat.com Severity|medium |low -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 10:06:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:06:17 +0000 Subject: [Bugs] [Bug 1302284] Offline Bricks are starting after probing new node In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1302284 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-09 10:06:17 --- Comment #2 from Amar Tumballi --- We will mark this as DEFERRED as we are not working on this. Will revisit this based on time and resource after couple of releases. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 10:09:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:09:05 +0000 Subject: [Bugs] [Bug 1304465] dnscache in libglusterfs returns 127.0.0.1 for 1st non-localhost request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1304465 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |rkothiya at redhat.com Severity|medium |low --- Comment #1 from Amar Tumballi --- Can we see if this is the behavior with glusterfs-6.x and take appropriate action? I have not heard about this issue in recent time, but would validate it before closing this. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 10:10:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:10:32 +0000 Subject: [Bugs] [Bug 1321921] auth.allow option with negation ! (!192.168.*.*) should not allow !192.168.*.* address In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1321921 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DEFERRED Last Closed| |2019-05-09 10:10:32 --- Comment #5 from Amar Tumballi --- We are not picking this issue in upcoming releases, and hence marking as DEFERRED. Will revisit after couple of releases depending on time/resource. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
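Regarding Bug 1321921 above: until negation is supported, a hedged workaround sketch is to express the exclusion with the separate auth.reject option instead of a '!' pattern in auth.allow; the volume name "demo" is a placeholder.
```
# Block the range explicitly rather than negating it in auth.allow
gluster volume set demo auth.reject '192.168.*.*'
# Verify what is in effect for both options
gluster volume get demo auth.allow
gluster volume get demo auth.reject
```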
From bugzilla at redhat.com Thu May 9 10:13:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:13:55 +0000 Subject: [Bugs] [Bug 1336513] changelog: compiler warning format string In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1336513 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com Severity|unspecified |high --- Comment #3 from Amar Tumballi --- The original issue reported in string format is now fixed as part of our 32bit/64bit compile options. But as Yaniv said in comment#2, there are other warnings with latest gcc. Recommend running compile with fedora 30 (gcc 8.3.x and above), and fixing all bugs. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 10:15:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:15:14 +0000 Subject: [Bugs] [Bug 1339145] "No such file of directory" error on deleting the nested directory structure. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1339145 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-09 10:15:14 --- Comment #1 from Amar Tumballi --- We are not picking this issue in upcoming releases, and hence marking as DEFERRED. Will revisit after couple of releases depending on time/resource. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 10:16:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:16:43 +0000 Subject: [Bugs] [Bug 1341429] successful mkdir from "bad" subvolume should be ignored while propagating result to higher layer In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1341429 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-09 10:16:43 --- Comment #2 from Amar Tumballi --- DHT now has entrylk and hence this can be treated as done. (glusterfs-6.x) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 10:16:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 10:16:43 +0000 Subject: [Bugs] [Bug 1341435] successful mkdir from "bad" subvolume should be ignored while propagating result to higher layer In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1341435 Bug 1341435 depends on bug 1341429, which changed state. Bug 1341429 Summary: successful mkdir from "bad" subvolume should be ignored while propagating result to higher layer https://bugzilla.redhat.com/show_bug.cgi?id=1341429 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WORKSFORME -- You are receiving this mail because: You are on the CC list for the bug. 
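For Bug 1336513 above, a sketch of the suggested check on a newer compiler; the warning flags and job count are only examples, not project policy.
```
# On Fedora 30 or newer (gcc >= 8.3), rebuild with warnings enabled and
# count what remains to be fixed
./autogen.sh
CFLAGS='-Wall -Wextra -Wformat=2' ./configure
make -j4 2>&1 | grep -c 'warning:'
```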
From bugzilla at redhat.com Thu May 9 11:12:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 11:12:47 +0000 Subject: [Bugs] [Bug 1463192] gfapi: discard glfs object when volume is deleted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1463192 Prasanna Kumar Kalever changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(prasanna.kalever@ | |redhat.com) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 11:14:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 11:14:28 +0000 Subject: [Bugs] [Bug 1369811] [RFE] gluster volume peer down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1369811 Prasanna Kumar Kalever changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(prasanna.kalever@ | |redhat.com) | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 11:15:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 11:15:44 +0000 Subject: [Bugs] [Bug 1702299] Custom xattrs are not healed on newly added brick In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702299 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 12:11:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 12:11:52 +0000 Subject: [Bugs] [Bug 1707393] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707393 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22693 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 12:11:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 12:11:53 +0000 Subject: [Bugs] [Bug 1707393] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707393 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22693 (cluster/dht: Refactor dht lookup functions) posted (#1) for review on release-6 by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 12:31:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 12:31:47 +0000 Subject: [Bugs] [Bug 1346170] Nested directory creation performance degrade after 100+ iterations 3*3 vol In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1346170 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Performance CC| |atumball at redhat.com, | |jahernan at redhat.com --- Comment #3 from Amar Tumballi --- We have noticed that backend brick performance matters (and amplifies the performance issues) a lot for how gluster performs. 
Considering in this test, we are ending up creating fragments in backend filesystem, I see that this may be expected behavior. I recommend running the tests with latest glusterfs-6.x releases, and checking what is the behavior. If not happening, close it as WORKSFORME. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 13:17:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 13:17:53 +0000 Subject: [Bugs] [Bug 1708257] New: Grant additional maintainers merge rights on release branches Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708257 Bug ID: 1708257 Summary: Grant additional maintainers merge rights on release branches Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: srangana at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org, hgowtham at redhat.com, rkothiya at redhat.com, sunkumar at redhat.com Target Milestone: --- Classification: Community Going forward the following owners would be managing the minor release branches and require merge rights for the same, - Hari Gowtham - Rinku Kothiya - Sunny Kumar I am marking this bug NEEDINFO from each of the above users, for them to provide their github username and also to ensure that 2FA is setup on their github accounts before permissions are granted to them. Branches that they need merge rights to are: release-4.1 release-5 release-6 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 13:18:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 13:18:57 +0000 Subject: [Bugs] [Bug 1708257] Grant additional maintainers merge rights on release branches In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708257 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(hgowtham at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 13:19:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 13:19:10 +0000 Subject: [Bugs] [Bug 1708257] Grant additional maintainers merge rights on release branches In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708257 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(sunkumar at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 13:19:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 13:19:25 +0000 Subject: [Bugs] [Bug 1708257] Grant additional maintainers merge rights on release branches In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708257 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(rkothiya at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu May 9 13:49:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 13:49:43 +0000 Subject: [Bugs] [Bug 1708257] Grant additional maintainers merge rights on release branches In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708257 hari gowtham changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(hgowtham at redhat.c | |om) | |needinfo?(sunkumar at redhat.c | |om) | |needinfo?(rkothiya at redhat.c | |om) | --- Comment #1 from hari gowtham --- hgowtham's username: harigowtham and 2FA has been activated for github. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 14:00:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 14:00:20 +0000 Subject: [Bugs] [Bug 1706893] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 14:17:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 14:17:28 +0000 Subject: [Bugs] [Bug 1690753] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690753 --- Comment #3 from Vishal Pandey --- RCA - The issue exists because, from 3.5.0 onwards, volume stop uses mgmt_v3, and mgmt_v3 only performs quorum validation for "volume profile". Some of the other volume commands that use mgmt_v3 do a quorum validation a bit later in their execution lifecycle, but there is no such check in "volume stop". Therefore, the volume stop command executed successfully even when quorum was not met. Fixed in upstream by - https://review.gluster.org/#/c/glusterfs/+/22692/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 19:37:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 19:37:08 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #654 from Worker Ant --- REVIEW: https://review.gluster.org/22659 (glusterd: reduce some work in glusterd-utils.c) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
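Regarding the Bug 1690753 RCA above, a minimal reproduction sketch; it assumes a three-node pool (n1, n2, n3) and a volume named "demo", and all names are illustrative.
```
# Enforce server-side quorum on the volume
gluster volume set all cluster.server-quorum-ratio 51
gluster volume set demo cluster.server-quorum-type server
# Break quorum by stopping glusterd on two of the three nodes
ssh n2 systemctl stop glusterd
ssh n3 systemctl stop glusterd
# Without the fix (https://review.gluster.org/22692) this still succeeds;
# with quorum validation added to volume stop it should be rejected
gluster volume stop demo
```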
From bugzilla at redhat.com Thu May 9 19:37:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 19:37:45 +0000 Subject: [Bugs] [Bug 1707700] maintain consistent values across for options when fetched at cluster level or volume level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707700 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-09 19:37:45 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22680 (glusterd: fix inconsistent global option output in volume get) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 19:52:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 19:52:19 +0000 Subject: [Bugs] [Bug 1348071] Change backup process so we only backups in a specific pattern In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1348071 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WORKSFORME Last Closed| |2019-05-09 19:52:19 --- Comment #1 from Amar Tumballi --- This is mostly done now with each job having a limit on how long it stores data. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 19:53:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 19:53:35 +0000 Subject: [Bugs] [Bug 1348072] Backups for Gerrit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1348072 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com Severity|unspecified |high --- Comment #4 from Amar Tumballi --- Any idea if this is done? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 19:55:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 19:55:56 +0000 Subject: [Bugs] [Bug 1349792] FutureFeature: Enhance Volume status to account for brick state and lifecycle operations (grow and shrink) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1349792 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-09 19:55:56 --- Comment #1 from Amar Tumballi --- We will not be working on this anytime soon, hence marking it DEFERRED. Will revisit this after couple of releases. -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Thu May 9 19:58:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 19:58:51 +0000 Subject: [Bugs] [Bug 1350238] Vagrant environment for tests should configure DNS for VMs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1350238 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WORKSFORME Last Closed| |2019-05-09 19:58:51 --- Comment #4 from Amar Tumballi --- These things are now fixed as we work with regression scripts properly in recent times (ie, haven't seen issues like this in a long time). Please feel free to reopen if still an issue. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:00:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:00:36 +0000 Subject: [Bugs] [Bug 1350365] Sharding may create shards beyond it's size In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1350365 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-09 20:00:36 --- Comment #2 from Amar Tumballi --- Sharding has been working fine in Hyperconverged setup for a long time now. Marking it as WORKSFORME. Reopen if this is still an issue. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:00:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:00:37 +0000 Subject: [Bugs] [Bug 1350407] Sharding may create shards beyond it's size In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1350407 Bug 1350407 depends on bug 1350365, which changed state. Bug 1350365 Summary: Sharding may create shards beyond it's size https://bugzilla.redhat.com/show_bug.cgi?id=1350365 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WORKSFORME -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:03:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:03:11 +0000 Subject: [Bugs] [Bug 1350477] Test to check if the maintainer reviewed the patch In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1350477 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WONTFIX Last Closed| |2019-05-09 20:03:11 --- Comment #13 from Amar Tumballi --- We are not going through this now. When the bug was raised, we had no commit right to relevant maintainers. Now, everyone who is a maintainer gets their access to merge, so this particular feature is not required. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu May 9 20:04:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:04:18 +0000 Subject: [Bugs] [Bug 1356079] Introduce common transaction framework as an alternative to lock-acquisition by individual translators on client stack In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1356079 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |UPSTREAM Last Closed| |2019-05-09 20:04:18 --- Comment #1 from Amar Tumballi --- This feature is moved to https://github.com/gluster/glusterfs/issues/342 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:07:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:07:05 +0000 Subject: [Bugs] [Bug 1369349] enable trash, then truncate a large file lead to glusterfsd segfault In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1369349 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-09 20:07:05 --- Comment #10 from Amar Tumballi --- As there is a workaround to get over the issue, we would like to fix it later (if we get to it). For now, CLOSING with DEFERRED. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 20:07:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:07:53 +0000 Subject: [Bugs] [Bug 1370921] Improve robustness by checking result of pthread_mutex_lock() In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1370921 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |EasyFix Priority|unspecified |low CC| |atumball at redhat.com Severity|medium |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:09:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:09:43 +0000 Subject: [Bugs] [Bug 1371633] mop off the glusterfs firewall service In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1371633 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low CC| |atumball at redhat.com Flags| |needinfo?(prasanna.kalever@ | |redhat.com) Severity|unspecified |low --- Comment #2 from Amar Tumballi --- Is it required even now? Why has nobody bothered about it in years? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Thu May 9 20:11:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:11:07 +0000 Subject: [Bugs] [Bug 1376858] crypt xlator should use linker and compile options from pkg-config instaed of "-lssl -lcrypo" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1376858 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WONTFIX Last Closed| |2019-05-09 20:11:07 --- Comment #2 from Amar Tumballi --- Removed Crypt xlator from the codebase as it was not actively maintained. (>=glusterf-6.0) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:12:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:12:35 +0000 Subject: [Bugs] [Bug 1376859] cdc xlator should use linker and compile options from pkg-config instead of "-fPIC" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1376859 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WONTFIX Last Closed| |2019-05-09 20:12:35 --- Comment #1 from Amar Tumballi --- Removed CDC xlator from the codebase as it was not actively maintained. (>=glusterf-6.0) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:15:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:15:36 +0000 Subject: [Bugs] [Bug 1379544] glusterd creates all rpc_clnts with the same name In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1379544 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |NOTABUG Last Closed| |2019-05-09 20:15:36 --- Comment #3 from Amar Tumballi --- This bug is not valid, as what we are logging with 'this->peerinfo.identifier' is what matters here. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:18:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:18:06 +0000 Subject: [Bugs] [Bug 1379982] Improve documentation of new public gfapi/upcall functions In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1379982 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|high |medium QA Contact|sdharane at redhat.com | Severity|high |medium -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu May 9 20:19:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:19:28 +0000 Subject: [Bugs] [Bug 1385175] systemd service fails if quorum not established before timeout In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1385175 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com, | |hgowtham at redhat.com, | |rkothiya at redhat.com, | |sunkumar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:21:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:21:41 +0000 Subject: [Bugs] [Bug 1385794] io-throttling: Calculate moving averages and throttle offending hosts In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1385794 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DEFERRED Last Closed| |2019-05-09 20:21:41 --- Comment #4 from Amar Tumballi --- Valid issue. With introduction of global thread pooling, and other focuses, keeping it under DEFERRED, and will revisit after couple of releases. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:25:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:25:33 +0000 Subject: [Bugs] [Bug 1387404] geo-rep: gsync-sync-gfid binary installed in /usr/share/... In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1387404 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low CC| |atumball at redhat.com, | |hgowtham at redhat.com, | |rkothiya at redhat.com, | |sunkumar at redhat.com Component|geo-replication |build Severity|unspecified |low --- Comment #1 from Amar Tumballi --- ``` glusterfs.spec.in:1205: %{_datadir}/glusterfs/scripts/gsync-sync-gfid ``` The above line is the reason for this rpmlint error, and we should consider changing it to different location if that is the norm. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:27:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:27:27 +0000 Subject: [Bugs] [Bug 1396341] Split-brain directories that only differ by trusted.dht should automatically fix-layout In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1396341 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Severity|unspecified |low Last Closed| |2019-05-09 20:27:27 --- Comment #1 from Amar Tumballi --- Considering this is not done in last 2yrs, and not in total focus for next releases, marking it as DEFERRED, so we can revisit them after couple of releases. -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu May 9 20:30:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:30:27 +0000 Subject: [Bugs] [Bug 1404654] io-stats miss statistics when fh is not newly created In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1404654 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WONTFIX Last Closed| |2019-05-09 20:30:27 --- Comment #2 from Amar Tumballi --- dengjin, it is complex with anon-fd feature of gluster (specially used for gNFS), and hence i would currently mark it as WONTFIX. We will revisit it when we decide to make gNFS fully supported component. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:31:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:31:56 +0000 Subject: [Bugs] [Bug 1408101] Fix potential socket_poller thread deadlock and resource leak In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1408101 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #1 from Amar Tumballi --- Is this still relevant? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 20:34:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:34:39 +0000 Subject: [Bugs] [Bug 1408784] Failed to build on MacOSX In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1408784 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE Last Closed| |2019-05-09 20:34:39 --- Comment #1 from Amar Tumballi --- *** This bug has been marked as a duplicate of bug 1155181 *** -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:34:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:34:39 +0000 Subject: [Bugs] [Bug 1155181] Lots of compilation warnings on OSX. We should probably fix them. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1155181 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amarts at gmail.com --- Comment #24 from Amar Tumballi --- *** Bug 1408784 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 9 20:50:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:50:08 +0000 Subject: [Bugs] [Bug 1409767] The API glfs_h_poll_upcall cannot return 0 and set an errno to denote no entries In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1409767 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |EasyFix Priority|unspecified |low CC| |atumball at redhat.com QA Contact|sdharane at redhat.com | Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Thu May 9 20:52:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 20:52:09 +0000 Subject: [Bugs] [Bug 1410100] Package arequal-checksum for broader community use In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1410100 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |EasyFix Priority|unspecified |low CC| |atumball at redhat.com Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 05:28:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 05:28:15 +0000 Subject: [Bugs] [Bug 1708257] Grant additional maintainers merge rights on release branches In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708257 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(sunkumar at redhat.c |needinfo?(rkothiya at redhat.c |om) |om) |needinfo?(rkothiya at redhat.c | |om) | --- Comment #3 from Sunny Kumar --- Hi Sunny's Username: sunnyku and 2FA is enabled for this account. -Sunny -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 06:41:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 06:41:07 +0000 Subject: [Bugs] [Bug 1708505] New: [EC] /tests/basic/ec/ec-data-heal.t is failing as heal is not happening properly Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708505 Bug ID: 1708505 Summary: [EC] /tests/basic/ec/ec-data-heal.t is failing as heal is not happening properly Product: GlusterFS Version: mainline Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: aspandey at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: /tests/basic/ec/ec-data-heal.t is failing as heal is not happening properly Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 07:00:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 07:00:38 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22700 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 10 07:00:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 07:00:39 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #655 from Worker Ant --- REVIEW: https://review.gluster.org/22700 (libglusterfs: Remove decompunder helper routines from symbol export) posted (#1) for review on master by Anoop C S -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 10:30:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 10:30:07 +0000 Subject: [Bugs] [Bug 1708603] New: [geo-rep]: Note section in document is required for ignore_deletes true config option where it might delete a file Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708603 Bug ID: 1708603 Summary: [geo-rep]: Note section in document is required for ignore_deletes true config option where it might delete a file Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: geo-replication Keywords: EasyFix, ZStream Severity: low Priority: low Assignee: bugs at gluster.org Reporter: sacharya at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, chrisw at redhat.com, csaba at redhat.com, nlevinki at redhat.com, rhinduja at redhat.com, sacharya at redhat.com, storage-qa-internal at redhat.com Depends On: 1224906 Blocks: 1223636 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1223636 [Bug 1223636] 3.1 QE Tracker https://bugzilla.redhat.com/show_bug.cgi?id=1224906 [Bug 1224906] [geo-rep]: Note section in document is required for ignore_deletes true config option where it might delete a file -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 10:37:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 10:37:32 +0000 Subject: [Bugs] [Bug 1708603] [geo-rep]: Note section in document is required for ignore_deletes true config option where it might delete a file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708603 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22702 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 10:37:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 10:37:34 +0000 Subject: [Bugs] [Bug 1708603] [geo-rep]: Note section in document is required for ignore_deletes true config option where it might delete a file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708603 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22702 (geo-rep: Note section in document is required for ignore_deletes) posted (#1) for review on master by Shwetha K Acharya -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
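For context on the Bug 1708603 doc change above, a sketch of how the option is inspected and toggled on an existing geo-rep session; the volume and host names are placeholders, and the hyphenated CLI spelling is assumed from other geo-rep config options.
```
# List the session configuration, then enable the option being documented
gluster volume geo-replication mastervol slavehost::slavevol config
gluster volume geo-replication mastervol slavehost::slavevol config ignore-deletes true
```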
From bugzilla at redhat.com Fri May 10 10:48:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 10:48:05 +0000 Subject: [Bugs] [Bug 1708603] [geo-rep]: Note section in document is required for ignore_deletes true config option where it might delete a file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708603 Shwetha K Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |sacharya at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 12:11:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 12:11:56 +0000 Subject: [Bugs] [Bug 1698449] thin-arbiter lock release fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698449 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-10 12:11:56 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22543 (afr: thin-arbiter lock release fixes) merged (#8) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 10 12:37:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 12:37:49 +0000 Subject: [Bugs] [Bug 1414608] Weird directory appear when rmdir the directory in disk full condition In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1414608 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com --- Comment #5 from Amar Tumballi --- Is this still an issue? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 10 12:39:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 12:39:18 +0000 Subject: [Bugs] [Bug 1417535] rebalance operation because of remove-brick failed on one of the cluster node In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1417535 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com --- Comment #3 from Amar Tumballi --- Ashish, did we finally fix this? What's the latest on this? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 12:45:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 12:45:16 +0000 Subject: [Bugs] [Bug 1420027] Cannot add-brick with servers in containers. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1420027 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed|2017-03-08 12:33:55 |2019-05-10 12:45:16 --- Comment #2 from Amar Tumballi --- We are not working on getting this working in the next releases. Will mark it as DEFERRED, and will revisit after a couple of releases.
With the latest containers, if you are running them in --privileged mode, we see that the mounts work there. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 10 12:51:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 12:51:51 +0000 Subject: [Bugs] [Bug 1423442] group files to set volume options should have comments In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1423442 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |EasyFix, StudentProject CC| |atumball at redhat.com --- Comment #2 from Amar Tumballi --- This can be done by making some changes in the way we read profile files, for example by moving to a json/yaml model for reading the profile file. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 12:52:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 12:52:42 +0000 Subject: [Bugs] [Bug 1426601] Allow to set dynamic library path from env variable In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1426601 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |INSUFFICIENT_DATA Last Closed| |2019-05-10 12:52:42 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 10 12:54:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 12:54:11 +0000 Subject: [Bugs] [Bug 1428047] Require a Jenkins job to validate Change-ID on commits to branches in glusterfs repository In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428047 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #19 from Amar Tumballi --- I guess for now, the work we have done through ./rfc.sh is good. Prefer to close as WORKSFORME. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 12:55:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 12:55:32 +0000 Subject: [Bugs] [Bug 1428052] performance/io-threads: Eliminate spinlock contention via fops-per-thread-ratio In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428052 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-10 12:55:32 -- You are receiving this mail because: You are on the CC list for the bug.
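Bug 1423442 above asks that the group files glusterd applies when a volume is assigned to a named option group (for example the "virt" group, shipped under /var/lib/glusterd/groups/ as plain option=value lines) support comments. The following is only a rough C illustration of that idea: a hypothetical, simplified reader that tolerates '#' comment lines and blank lines in such a file. It is not the actual glusterd parser, and the json/yaml model mentioned in the comment would be an alternative way to get the same result.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical reader for an "option=value" group file that tolerates
 * '#' comment lines and blank lines. Illustration only, not glusterd code. */
static void read_group_file(const char *path)
{
    FILE *fp = fopen(path, "r");
    char line[512];

    if (fp == NULL) {
        perror("fopen");
        return;
    }

    while (fgets(line, sizeof(line), fp) != NULL) {
        char *p = line;

        while (isspace((unsigned char)*p))      /* trim leading whitespace */
            p++;
        p[strcspn(p, "\r\n")] = '\0';           /* drop trailing newline */

        if (*p == '\0' || *p == '#')            /* blank line or comment */
            continue;

        char *eq = strchr(p, '=');              /* expect option=value */
        if (eq == NULL) {
            fprintf(stderr, "malformed line: %s\n", p);
            continue;
        }

        *eq = '\0';
        printf("would set option '%s' to '%s'\n", p, eq + 1);
    }

    fclose(fp);
}

int main(int argc, char *argv[])
{
    if (argc > 1)
        read_group_file(argv[1]);
    return 0;
}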
From bugzilla at redhat.com Fri May 10 12:57:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 12:57:30 +0000 Subject: [Bugs] [Bug 1428059] performance/md-cache: Add an option to cache all xattrs for an inode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428059 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low CC| |atumball at redhat.com Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 10 12:59:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 12:59:00 +0000 Subject: [Bugs] [Bug 1428066] debug/io-stats: Track path of operations in FOP samples In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428066 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Severity|unspecified |low Last Closed| |2019-05-10 12:59:00 --- Comment #1 from Amar Tumballi --- No activity on this in last 2yrs. Will revisit after couple of releases. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:06:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:06:11 +0000 Subject: [Bugs] [Bug 1434332] crash from write-behind In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1434332 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com Severity|unspecified |high --- Comment #2 from Amar Tumballi --- Have not seen it in a long time. But would be good to check this out with latest master and close it. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 10 13:07:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:07:38 +0000 Subject: [Bugs] [Bug 1437477] .trashcan doesn't show deleted files in distributed replicated cluster In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1437477 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-10 13:07:38 --- Comment #1 from Amar Tumballi --- as it is a known-issue, we are not planning to fix it in currently planned releases. Will revisit after couple of releases. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 10 13:10:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:10:06 +0000 Subject: [Bugs] [Bug 1458719] download.gluster.org occasionally has the wrong permissions causing problems for users In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1458719 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WORKSFORME Last Closed| |2019-05-10 13:10:06 --- Comment #2 from Amar Tumballi --- Not seen in a long time now. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:16:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:16:03 +0000 Subject: [Bugs] [Bug 1431199] Request to automate closing github PRs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1431199 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |NEW Severity|unspecified |medium -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:16:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:16:38 +0000 Subject: [Bugs] [Bug 1439706] Change default name in gerrit patch In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1439706 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW CC| |atumball at redhat.com Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:17:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:17:47 +0000 Subject: [Bugs] [Bug 1463191] gfapi: discard glfs object when volume is deleted In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1463191 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW CC| |atumball at redhat.com Severity|unspecified |low -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:20:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:54 +0000 Subject: [Bugs] [Bug 1557127] github issue update on spec commits In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1557127 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|high |low Status|ASSIGNED |NEW Severity|high |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:20:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:54 +0000 Subject: [Bugs] [Bug 1564451] The abandon job for patches should post info in bugzilla that some patch is abandon'd. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1564451 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|high |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:20:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:55 +0000 Subject: [Bugs] [Bug 1489325] Place to host gerritstats In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1489325 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|medium |low Status|ASSIGNED |NEW Severity|high |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:20:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:55 +0000 Subject: [Bugs] [Bug 1584998] Need automatic inclusion of few reviewers to a given patch In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1584998 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|high |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:20:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:56 +0000 Subject: [Bugs] [Bug 1357421] Fail smoke tests if cherry-picked bugs contain the old git-tags In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1357421 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:20:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:56 +0000 Subject: [Bugs] [Bug 1672656] glustereventsd: crash, ABRT report for package glusterfs has reached 100 occurrences In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672656 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|medium |low Status|ASSIGNED |NEW Severity|high |low -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:20:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:56 +0000 Subject: [Bugs] [Bug 1631390] Run smoke and regression on a patch only after passing clang-format job In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1631390 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|medium |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 10 13:20:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:57 +0000 Subject: [Bugs] [Bug 1584992] Need python pep8 and other relevant tests in smoke if a patch includes any python file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1584992 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|high |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:20:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:57 +0000 Subject: [Bugs] [Bug 1657584] Re-enable TSAN jobs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657584 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:20:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:58 +0000 Subject: [Bugs] [Bug 1562670] Run libgfapi-python tests on Gerrit against glusterfs changes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1562670 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:20:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:58 +0000 Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and files listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670382 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|high |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:20:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:59 +0000 Subject: [Bugs] [Bug 1564130] need option 'cherry-pick to release-x.y' in reviews In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1564130 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|high |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 10 13:20:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:59 +0000 Subject: [Bugs] [Bug 1620377] Coverity scan setup for gluster-block and related projects In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1620377 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:20:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:20:59 +0000 Subject: [Bugs] [Bug 1623596] Git plugin might be suffering from memory leak In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1623596 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:21:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:21:00 +0000 Subject: [Bugs] [Bug 1597731] need 'shellcheck' in smoke. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1597731 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|medium |low Status|ASSIGNED |NEW Severity|high |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:21:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:21:00 +0000 Subject: [Bugs] [Bug 1638030] Need a regression job to test out Py3 support in Glusterfs code base In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1638030 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:21:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:21:01 +0000 Subject: [Bugs] [Bug 1620580] Deleted a volume and created a new volume with similar but not the same name. The kubernetes pod still keeps on running and doesn't crash. Still possible to write to gluster mount In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1620580 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|high |low Status|ASSIGNED |NEW Severity|high |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 10 13:21:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:21:01 +0000 Subject: [Bugs] [Bug 1594857] Make smoke runs detect test cases added to patch In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1594857 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|high |low Status|ASSIGNED |NEW Severity|high |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:21:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:21:01 +0000 Subject: [Bugs] [Bug 1463273] infra: include bugzilla query in the weekly BZ email In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1463273 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:21:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:21:02 +0000 Subject: [Bugs] [Bug 1598326] Setup CI for gluster-block In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1598326 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |NEW Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:23:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:23:27 +0000 Subject: [Bugs] [Bug 1690769] GlusterFS 5.5 crashes in 1x4 replicate setup. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690769 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |WORKSFORME Last Closed| |2019-05-10 13:23:27 --- Comment #5 from Amar Tumballi --- Closing as WORKSFORME in GlusterFS, as it was found to be a CPU flag issue. Thanks to Xavi for pitching in, and Artem for the patience. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 13:25:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 13:25:22 +0000 Subject: [Bugs] [Bug 1685051] New Project create request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685051 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |CURRENTRELEASE Last Closed|2019-03-04 09:26:26 |2019-05-10 13:25:22 --- Comment #8 from Amar Tumballi --- https://github.com/gluster/devblog && https://gluster.github.io/devblog/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 10 14:16:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 14:16:05 +0000 Subject: [Bugs] [Bug 1690753] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690753 --- Comment #4 from Atin Mukherjee --- Root cause : Since the volume stop command has been ported from synctask to mgmt_v3, the quorum check was missed out in mgmt_v3 for stop volume transaction. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 10 14:20:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 14:20:03 +0000 Subject: [Bugs] [Bug 1707081] Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707081 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-10 14:20:03 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22667 (shd/glusterd: Serialize shd manager to prevent race condition) merged (#7) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 10 14:54:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 14:54:50 +0000 Subject: [Bugs] [Bug 1708163] tests: fix bug-1319374.c compile warnings. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708163 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-10 14:54:50 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22691 (tests: fix bug-1319374.c compile warnings.) merged (#2) on master by Raghavendra Talur -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 10 14:56:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 14:56:01 +0000 Subject: [Bugs] [Bug 1434332] crash from write-behind In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1434332 --- Comment #3 from Raghavendra G --- looks to be a dup of bz 1528558. Particularly the commit msg of the patch exactly says how a corrupted/freedup request can end up in todo list: COMMIT: https://review.gluster.org/19064 committed in master by \"Raghavendra G\" with a commit message- performance/write-behind: fix bug while handling short writes The variabled "fulfilled" in wb_fulfill_short_write is not reset to 0 while handling every member of the list. This has some interesting consequences: * If we break from the loop while processing last member of the list head->winds, req is reset to head as the list is a circular one. However, head is already fulfilled and can potentially be freed. So, we end up adding a freed request to wb_inode->todo list. This is the RCA for the crash tracked by the bug associated with this patch (Note that we saw "holder" which is freed in todo list). 
* If we break from the loop while processing any of the last but one member of the list head->winds, req is set to next member in the list, skipping the current request, even though it is not entirely synced. This can lead to data corruption. The fix is very simple and we've to change the code to make sure "fulfilled" reflects whether the current request is fulfilled or not and it doesn't carry history of previous requests in the list. Change-Id: Ia3d6988175a51c9e08efdb521a7b7938b01f93c8 BUG: 1528558 Signed-off-by: Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 10 19:30:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 10 May 2019 19:30:31 +0000 Subject: [Bugs] [Bug 1670031] performance regression seen with smallfile workload tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670031 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22156 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 00:19:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:19:15 +0000 Subject: [Bugs] [Bug 1098025] Disconnects of peer and brick is logged while snapshot creations were in progress during IO In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1098025 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-11 00:19:15 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 00:19:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:19:16 +0000 Subject: [Bugs] [Bug 1097224] Disconnects of peer and brick is logged while snapshot creations were in progress during IO In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1097224 Bug 1097224 depends on bug 1098025, which changed state. Bug 1098025 Summary: Disconnects of peer and brick is logged while snapshot creations were in progress during IO https://bugzilla.redhat.com/show_bug.cgi?id=1098025 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |WORKSFORME -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 00:20:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:20:14 +0000 Subject: [Bugs] [Bug 1408431] GlusterD is not starting after multiple restarts on one node and parallel volume set options from other node. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1408431 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 00:20:14 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
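To make the write-behind fix quoted in comment #3 on bug 1434332 above easier to follow: the root cause is a status flag that is computed once and then reused for every request in the list, so state from an earlier (possibly already freed) request leaks into the handling of later ones. The following is a simplified, hypothetical C sketch of that pattern and of the fix; it is not the actual wb_fulfill_short_write() code, and the structure and function names are invented for illustration.

#include <stddef.h>

/* Hypothetical, pared-down request descriptor, for illustration only. */
struct short_write_req {
    struct short_write_req *next;   /* singly linked for simplicity */
    size_t expected;                /* bytes this request wanted written */
};

/*
 * Buggy shape: 'fulfilled' is declared once outside the loop and never
 * reset, so the value computed for an earlier request leaks into the
 * handling of later ones. With synced = 15 and three requests of 10
 * bytes each, the loop walks past requests that were never fully
 * written, which is the "skipping the current request" case above.
 */
static struct short_write_req *
find_pending_buggy(struct short_write_req *head, size_t synced)
{
    int fulfilled = 0;
    struct short_write_req *req;

    for (req = head; req != NULL; req = req->next) {
        if (synced >= req->expected) {
            fulfilled = 1;          /* stays set for all later requests */
            synced -= req->expected;
            continue;
        }
        if (fulfilled)              /* stale state from a previous request */
            continue;               /* wrongly skips an unfinished request */
        break;
    }
    return req;
}

/*
 * Fixed shape: the flag reflects only the current request, so the loop
 * stops exactly at the first request that was not fully synced.
 */
static struct short_write_req *
find_pending_fixed(struct short_write_req *head, size_t synced)
{
    struct short_write_req *req;

    for (req = head; req != NULL; req = req->next) {
        int fulfilled = (synced >= req->expected);  /* per-request state */
        if (!fulfilled)
            break;
        synced -= req->expected;
    }
    return req;
}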
From bugzilla at redhat.com Sat May 11 00:20:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:20:50 +0000 Subject: [Bugs] [Bug 1590442] GlusterFS 4.1.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590442 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-4.1.1 Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 00:20:50 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 00:22:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:22:02 +0000 Subject: [Bugs] [Bug 1590657] Excessive logging in posix_check_internal_writes() due to NULL dict In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590657 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-4.1.1 Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 00:22:02 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 00:23:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:23:11 +0000 Subject: [Bugs] [Bug 1591185] Gluster Block PVC fails to mount on Jenkins pod In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1591185 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-4.1.1 Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 00:23:11 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 00:25:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:25:14 +0000 Subject: [Bugs] [Bug 1609799] IPv6 setup broken after updating to 4.1 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1609799 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed|2018-08-10 08:47:02 |2019-05-11 00:25:14 --- Comment #8 from Amar Tumballi --- We did fix few things with IPv6 with glusterfs-6.0 (now 6.1 is out), please upgrade. (https://bugzilla.redhat.com/1635863) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 00:25:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:25:53 +0000 Subject: [Bugs] [Bug 1622665] clang-scan report: glusterfs issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1622665 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |POST -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat May 11 00:26:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:26:23 +0000 Subject: [Bugs] [Bug 1622665] clang-scan report: glusterfs issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1622665 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords|Reopened |Tracking, Triaged Fixed In Version|glusterfs-5.0 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 00:26:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:26:59 +0000 Subject: [Bugs] [Bug 1644761] CVE-2018-14652 glusterfs: Buffer overflow in "features/locks" translator allows for denial of service [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644761 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 00:26:59 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 00:27:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:27:03 +0000 Subject: [Bugs] [Bug 1645363] CVE-2018-14652 glusterfs: Buffer overflow in "features/locks" translator allows for denial of service [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1645363 Bug 1645363 depends on bug 1644761, which changed state. Bug 1644761 Summary: CVE-2018-14652 glusterfs: Buffer overflow in "features/locks" translator allows for denial of service [fedora-all] https://bugzilla.redhat.com/show_bug.cgi?id=1644761 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 00:27:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:27:04 +0000 Subject: [Bugs] [Bug 1645373] CVE-2018-14652 glusterfs: Buffer overflow in "features/locks" translator allows for denial of service [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1645373 Bug 1645373 depends on bug 1644761, which changed state. Bug 1644761 Summary: CVE-2018-14652 glusterfs: Buffer overflow in "features/locks" translator allows for denial of service [fedora-all] https://bugzilla.redhat.com/show_bug.cgi?id=1644761 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Sat May 11 00:28:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:28:28 +0000 Subject: [Bugs] [Bug 1702299] Custom xattrs are not healed on newly added brick In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702299 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Priority|unspecified |medium CC| |atumball at redhat.com Assignee|bugs at gluster.org |moagrawa at redhat.com Severity|unspecified |medium -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 00:32:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:32:57 +0000 Subject: [Bugs] [Bug 1147252] Pacemaker OCF volume Resource Agent fails when bricks are in different domain to the system hostname. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1147252 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-11 00:32:57 --- Comment #3 from Amar Tumballi --- We haven't seen the review comments addressed at all in above patch. As none of the current developers can work on it, marking it DEFERRED, happy to get help and close it. If no one bothers, we will revisit it after sometime. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 00:32:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:32:58 +0000 Subject: [Bugs] [Bug 1130763] Pacemaker OCF volume Resource Agent fails when bricks are in different domain to the system hostname. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1130763 Bug 1130763 depends on bug 1147252, which changed state. Bug 1147252 Summary: Pacemaker OCF volume Resource Agent fails when bricks are in different domain to the system hostname. https://bugzilla.redhat.com/show_bug.cgi?id=1147252 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |DEFERRED -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 00:35:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:35:38 +0000 Subject: [Bugs] [Bug 1164218] glfs_set_volfile_server() method causes segmentation fault when bad arguments are passed. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1164218 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-11 00:35:38 --- Comment #12 from Amar Tumballi --- with glusterfs-6.x series. -- You are receiving this mail because: You are on the CC list for the bug. 
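For context on bug 1164218 above (glfs_set_volfile_server() segfaulting on bad arguments): below is a minimal caller-side sketch of the usual libgfapi bring-up sequence, with arguments and return codes checked rather than assumed. The volume name and host are placeholders; the calls used (glfs_new, glfs_set_volfile_server, glfs_init, glfs_fini) are the standard public gfapi entry points. Building typically needs the glusterfs-api development package and linking with -lgfapi.

#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    const char *volname = "testvol";   /* placeholder volume name */
    const char *host    = "server1";   /* placeholder management host */
    glfs_t *fs = glfs_new(volname);

    if (fs == NULL) {
        fprintf(stderr, "glfs_new failed\n");
        return 1;
    }

    /* Pass a non-NULL transport and host and a sane port, and check
     * the return value instead of assuming success. */
    if (glfs_set_volfile_server(fs, "tcp", host, 24007) != 0) {
        fprintf(stderr, "glfs_set_volfile_server failed\n");
        glfs_fini(fs);
        return 1;
    }

    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return 1;
    }

    /* ... regular glfs_open()/glfs_write() style I/O would go here ... */

    glfs_fini(fs);
    return 0;
}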
From bugzilla at redhat.com Sat May 11 00:37:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:37:40 +0000 Subject: [Bugs] [Bug 1231688] Bitrot: gluster volume stop show logs "[glust In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1231688 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC|smohan at redhat.com |atumball at redhat.com, | |khiremat at redhat.com Docs Contact|bugs at gluster.org | Assignee|bugs at gluster.org |rabhat at redhat.com QA Contact|marcobillpeter at redhat.com | Severity|high |medium --- Comment #9 from Amar Tumballi --- Most probably this is fixed already in later versions. Would like someone to validate and close it. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Sat May 11 00:39:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:39:17 +0000 Subject: [Bugs] [Bug 1241494] [Backup]: Glusterfind CLI commands need to verify the accepted names for session/volume, before failing with error(s) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1241494 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low CC| |atumball at redhat.com Assignee|sarumuga at redhat.com |sacharya at redhat.com -- You are receiving this mail because: You are the QA Contact for the bug. From bugzilla at redhat.com Sat May 11 00:41:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 00:41:37 +0000 Subject: [Bugs] [Bug 1242955] quota, snapd daemon is not running in test cluster framework In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1242955 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-11 00:41:37 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 01:56:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 01:56:59 +0000 Subject: [Bugs] [Bug 1283988] [RFE] introducing unix domain socket for I/O In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1283988 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-11 01:56:59 --- Comment #18 from Amar Tumballi --- We will revisit it after couple of releases, and hence DEFERRED. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 01:58:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 01:58:18 +0000 Subject: [Bugs] [Bug 1610751] severe drop in response time of simultaneous lookups with other-eager-lock enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1610751 Bug 1610751 depends on bug 1598056, which changed state. 
Bug 1598056 Summary: [Perf] mkdirs are regressing 54% on 3 way replicated volume https://bugzilla.redhat.com/show_bug.cgi?id=1598056 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED Resolution|NOTABUG |--- -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 01:58:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 01:58:19 +0000 Subject: [Bugs] [Bug 1651508] severe drop in response time of simultaneous lookups with other-eager-lock enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651508 Bug 1651508 depends on bug 1598056, which changed state. Bug 1598056 Summary: [Perf] mkdirs are regressing 54% on 3 way replicated volume https://bugzilla.redhat.com/show_bug.cgi?id=1598056 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED Resolution|NOTABUG |--- -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 01:59:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 01:59:45 +0000 Subject: [Bugs] [Bug 1291262] glusterd: fix gluster volume sync after successful deletion In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1291262 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com Assignee|prasanna.kalever at redhat.com |srakonde at redhat.com --- Comment #3 from Amar Tumballi --- Looks like these issues are now resolved. Would be good to clarify and mark this CLOSED. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:01:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:01:07 -0000 Subject: [Bugs] [Bug 1295107] Fix mem leaks related to gfapi applications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1295107 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version|glusterfs-3.8rc2 |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed|2016-06-16 13:53:22 |2019-05-11 02:01:02 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:01:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:01:04 +0000 Subject: [Bugs] [Bug 1300924] Fix mem leaks related to gfapi applications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1300924 Bug 1300924 depends on bug 1295107, which changed state. Bug 1295107 Summary: Fix mem leaks related to gfapi applications https://bugzilla.redhat.com/show_bug.cgi?id=1295107 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat May 11 02:01:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:01:04 +0000 Subject: [Bugs] [Bug 1311441] Fix mem leaks related to gfapi applications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1311441 Bug 1311441 depends on bug 1295107, which changed state. Bug 1295107 Summary: Fix mem leaks related to gfapi applications https://bugzilla.redhat.com/show_bug.cgi?id=1295107 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 02:03:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:03:23 +0000 Subject: [Bugs] [Bug 1299203] resolve-gids is not needed for Linux kernels v3.8 and newer In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1299203 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Severity|unspecified |low Last Closed| |2019-05-11 02:03:23 --- Comment #4 from Amar Tumballi --- Hi Vitaly Lipatov, marking it as DEFERRED, as the issue is not being looked at actively. We will revisit later. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:04:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:04:43 +0000 Subject: [Bugs] [Bug 1302203] [FEAT] selective read-only mode In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1302203 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-11 02:04:43 --- Comment #10 from Amar Tumballi --- Not a priority focus for team. Will revisit in future. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:05:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:05:52 +0000 Subject: [Bugs] [Bug 1321916] auth.allow and auth.reject option should accept ip address with negation ! In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1321916 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 02:05:52 --- Comment #5 from Amar Tumballi --- We have added tests to validate the options, and ! works. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 02:05:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:05:53 +0000 Subject: [Bugs] [Bug 1321921] auth.allow option with negation ! (!192.168.*.*) should not allow !192.168.*.* address In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1321921 Bug 1321921 depends on bug 1321916, which changed state. Bug 1321916 Summary: auth.allow and auth.reject option should accept ip address with negation ! 
https://bugzilla.redhat.com/show_bug.cgi?id=1321916 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 02:08:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:08:45 +0000 Subject: [Bugs] [Bug 1335373] cyclic dentry loop in inode table In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335373 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 02:08:45 --- Comment #4 from Amar Tumballi --- This can be resolved by `gluster volume set $VOL features.sdfs enable` (the above patch finally made it to repo). -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:10:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:10:27 +0000 Subject: [Bugs] [Bug 1338593] clean posix locks based on client-id as part of server_connection_cleanup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1338593 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-11 02:10:27 --- Comment #13 from Amar Tumballi --- Any progress on this? Considering we have had no issues due to this in a long time, marking it as DEFERRED. Will revisit when we get time. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:10:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:10:28 +0000 Subject: [Bugs] [Bug 1384388] clean posix locks based on client-id as part of server_connection_cleanup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1384388 Bug 1384388 depends on bug 1338593, which changed state. Bug 1338593 Summary: clean posix locks based on client-id as part of server_connection_cleanup https://bugzilla.redhat.com/show_bug.cgi?id=1338593 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |DEFERRED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:11:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:11:28 +0000 Subject: [Bugs] [Bug 1349620] libgfapi: Reduce memcpy in glfs write In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1349620 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low CC| |atumball at redhat.com -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Sat May 11 02:14:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:14:54 +0000 Subject: [Bugs] [Bug 1359153] Add reclaim lock support for posix locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1359153 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-11 02:14:54 --- Comment #3 from Amar Tumballi --- Not working on this feature immediately. Will mark it DEFERRED. Revisiting this after couple of releases. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:14:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:14:54 +0000 Subject: [Bugs] [Bug 1350744] GlusterFS 3.9.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1350744 Bug 1350744 depends on bug 1359153, which changed state. Bug 1359153 Summary: Add reclaim lock support for posix locks https://bugzilla.redhat.com/show_bug.cgi?id=1359153 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |DEFERRED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:17:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:17:16 +0000 Subject: [Bugs] [Bug 1362387] nfs-ganesha server should flush locks when in grace In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1362387 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-11 02:17:16 --- Comment #2 from Amar Tumballi --- Any goals to fix this? If this is still an issue, mark it as DEFERRED, if fixed CLOSE it as CURRENTRELEASE/WORKSFORME. Currently marking as DEFERRED as I have not heard about this in a long time. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:18:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:18:34 +0000 Subject: [Bugs] [Bug 1365898] meta: read from CIFS and windows clinet fails in meta xlators In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1365898 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-11 02:18:34 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat May 11 02:19:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:19:13 +0000 Subject: [Bugs] [Bug 1365930] meta/samba: Error while trying to access the files inside graph directory In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1365930 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-11 02:19:13 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 02:21:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:21:17 +0000 Subject: [Bugs] [Bug 1366198] We need a cluster create tool to create VMs to be used with gdeploy or to test Gluster In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1366198 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |WORKSFORME Last Closed| |2019-05-11 02:21:17 --- Comment #2 from Amar Tumballi --- https://github.com/raghavendra-talur/vagrant-cluster-creator is the place to get such tools. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:22:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:22:00 +0000 Subject: [Bugs] [Bug 1335373] cyclic dentry loop in inode table In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335373 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(atumball at redhat.c | |om) --- Comment #5 from Raghavendra G --- (In reply to Amar Tumballi from comment #4) > This can be resolved by `gluster volume set $VOL features.sdfs enable` (the > above patch finally made it to repo). But doesn't enabling sdfs regresses performance significantly and hence not a viable solution? Note that we have run into this problem even in client's inode table. Serializing on client makes glusterfs almost useless due to serious performance drop. So, even though sdfs is available (that too only on bricks not on client, but this problem exists on client too), in its current form it cannot be used practically and hence I would say this bug is not fixed. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:22:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:22:39 +0000 Subject: [Bugs] [Bug 1371648] Support for futimens in glusterfs code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1371648 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 02:22:39 --- Comment #2 from Amar Tumballi --- https://review.gluster.org/#/c/glusterfs/+/14815/ (fixes this) -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Sat May 11 02:23:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:23:43 +0000 Subject: [Bugs] [Bug 1376757] Data corruption in write ordering of rebalance and application writes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1376757 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |NEW CC| |atumball at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 02:25:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:25:11 +0000 Subject: [Bugs] [Bug 1378425] Enable setfsuid/setfsgid In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1378425 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-11 02:25:11 --- Comment #2 from Amar Tumballi --- No immediate plans to fix it, would be re-looked later. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:25:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:25:49 +0000 Subject: [Bugs] [Bug 1335373] cyclic dentry loop in inode table In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335373 --- Comment #6 from Raghavendra G --- (In reply to Raghavendra G from comment #5) > (In reply to Amar Tumballi from comment #4) > > This can be resolved by `gluster volume set $VOL features.sdfs enable` (the > > above patch finally made it to repo). > > But doesn't enabling sdfs regresses performance significantly and hence not > a viable solution? commit 829337ed3971a53086f1562d826e79d4f3e3ed39 Author: Amar Tumballi Date: Mon Jan 28 18:30:24 2019 +0530 features/sdfs: disable by default With the feature enabled, some of the performance testing results, specially those which create millions of small files, got approximately 4x regression compared to version before enabling this. On master without this patch: 765 creates/sec On master with this patch : 3380 creates/sec Also there seems to be regression caused by this in 'ls -l' workload. On master without this patch: 3030 files/sec On master with this patch : 16610 files/sec This is a feature added to handle multiple clients parallely operating (specially those which race for file creates with same name) on a single namespace/directory. Considering that is < 3% of Gluster's usecase right now, it makes sense to disable the feature by default, so we don't penalize the default users who doesn't bother about this usecase. Also note that the client side translators, specially, distribute, replicate and disperse already handle the issue upto 99.5% of the cases without SDFS, so it makes sense to keep the feature disabled by default. Credits: Shyamsunder for running the tests and getting the numbers. Change-Id: Iec49ce1d82e621e9db25eb633fcb1d932e74f4fc Updates: bz#1670031 Signed-off-by: Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Sat May 11 02:26:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:26:56 +0000 Subject: [Bugs] [Bug 1335373] cyclic dentry loop in inode table In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335373 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED Resolution|CURRENTRELEASE |--- Keywords| |Reopened --- Comment #7 from Raghavendra G --- Moving back this to assigned till the discussion about perf impact of sdfs is resolved. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:27:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:27:03 +0000 Subject: [Bugs] [Bug 1393743] the return size of fstat sometime is not correct while write-behind feature enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1393743 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 02:37:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 02:37:01 +0000 Subject: [Bugs] [Bug 1335373] cyclic dentry loop in inode table In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335373 --- Comment #8 from Raghavendra G --- Another data point for this bug is cyclic dentry loops can cause *serious* performance regression in lookup codepath. Especially if cyclic loops are formed relatively deep in directory hierarchy as they increase the dentry search time exponentially (2 pow number-of-duplicate-dentries). So, this bug is correctness as well as a performance one. And I've seen the issue in one of production environments (though we couldn't exactly measure perf impact of this in this setup, but perf impact was seen in test setups) - https://bugzilla.redhat.com/show_bug.cgi?id=1696353#c11 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 03:50:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 03:50:09 +0000 Subject: [Bugs] [Bug 1393743] the return size of fstat sometime is not correct while write-behind feature enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1393743 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 03:50:09 --- Comment #7 from Raghavendra G --- https://review.gluster.org/#/c/glusterfs/+/20549/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 03:52:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 03:52:50 +0000 Subject: [Bugs] [Bug 1335373] cyclic dentry loop in inode table In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335373 --- Comment #9 from Raghavendra G --- s/cyclic dentry loops/stale dentries. What I've said about performance in previous comments is still valid. -- You are receiving this mail because: You are on the CC list for the bug. 
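As a rough illustration of the scaling claimed in comment #8 of the bug 1335373 thread above (purely arithmetic, not a measured figure): if each duplicate dentry doubles the search work, then with n duplicate dentries the lookup cost grows as

    T(n) \propto 2^{n}, \qquad \text{e.g. } n = 10 \;\Rightarrow\; 2^{10} = 1024

i.e. even ten stale dentries already mean roughly a thousand-fold increase in dentry search time.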
From bugzilla at redhat.com Sat May 11 04:15:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 04:15:40 +0000 Subject: [Bugs] [Bug 1690753] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690753 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-11 04:15:40 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22692 (glusterd: Add gluster volume stop operation to glusterd_validate_quorum()) merged (#4) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 04:15:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 04:15:41 +0000 Subject: [Bugs] [Bug 1706893] Volume stop when quorum not met is successful In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706893 Bug 1706893 depends on bug 1690753, which changed state. Bug 1690753 Summary: Volume stop when quorum not met is successful https://bugzilla.redhat.com/show_bug.cgi?id=1690753 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 09:03:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 09:03:39 +0000 Subject: [Bugs] [Bug 1417535] rebalance operation because of remove-brick failed on one of the cluster node In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1417535 --- Comment #4 from Ashish Pandey --- Yes, We have tested last few releases and did not see this issue I think this issue has been fixed and we can close this. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 09:26:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 09:26:43 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22705 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 09:26:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 09:26:44 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #656 from Worker Ant --- REVIEW: https://review.gluster.org/22705 (store: minor changes to store functions.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat May 11 09:54:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 09:54:59 +0000 Subject: [Bugs] [Bug 1417535] rebalance operation because of remove-brick failed on one of the cluster node In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1417535 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-11 09:54:59 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 10:11:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 10:11:07 +0000 Subject: [Bugs] [Bug 1335373] cyclic dentry loop in inode table In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335373 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(atumball at redhat.c | |om) | --- Comment #10 from Amar Tumballi --- Raghavendra, Yes, SDFS has performance impact. But the feature is still present and can be enabled if user's work load demands it. My reasoning of closing this bug was mainly because for none of our users, this particular case was not hit (ie, there are no reports of it). So, focusing on that work when no one needs it is meaningless, that too when there are options available. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 10:17:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 10:17:17 +0000 Subject: [Bugs] [Bug 1426606] dht/rebalance: Increase maximum read block size from 128 KB to 1 MB In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1426606 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 10:17:17 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 10:33:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 10:33:44 +0000 Subject: [Bugs] [Bug 1430623] pthread mutexes and condition variables are not destroyed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1430623 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Status|POST |NEW CC| |atumball at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat May 11 10:35:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 10:35:11 +0000 Subject: [Bugs] [Bug 1439163] Shouldn't set inode_ctx to be LOOKUP_NOT_NEEDED before lookup fop finish In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1439163 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |NOTABUG Last Closed| |2019-05-11 10:35:11 --- Comment #2 from Amar Tumballi --- 'inode_need_lookup()' itself is now now (glusterfs-6.x) removed. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 10:43:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 10:43:41 +0000 Subject: [Bugs] [Bug 1443027] Accessing file from aux mount is not triggering afr selfheals. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1443027 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|POST |NEW CC| |atumball at redhat.com Severity|unspecified |low --- Comment #2 from Amar Tumballi --- As the patch is now abandoned. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 10:47:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 10:47:35 +0000 Subject: [Bugs] [Bug 1458197] io-stats usability/performance statistics enhancements In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1458197 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |NEW -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 10:49:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 10:49:04 +0000 Subject: [Bugs] [Bug 1464495] [Remove-brick] Hardlink migration fails with "migrate-data failed for $file [Unknown error 109023]" errors in rebalance logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1464495 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |NEW CC| |atumball at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 10:49:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 10:49:39 +0000 Subject: [Bugs] [Bug 1464639] Possible stale read in afr due to un-notified pending xattr change In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1464639 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |NEW CC| |atumball at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Sat May 11 10:51:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 10:51:52 +0000 Subject: [Bugs] [Bug 1468510] Keep all Debug level log in circular in-memory buffer In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1468510 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |StudentProject Priority|unspecified |low Status|POST |NEW CC| |atumball at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 10:54:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 10:54:05 +0000 Subject: [Bugs] [Bug 1500653] Size unit should be written as GiB/TiB/MiB notation. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1500653 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords|ZStream |EasyFix, StudentProject Priority|unspecified |low Status|POST |NEW -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 10:59:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 10:59:57 +0000 Subject: [Bugs] [Bug 1521038] Core dumps in protocol/server from 3.8-fb ports In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1521038 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|POST |NEW Severity|unspecified |medium --- Comment #2 from Amar Tumballi --- Not seen any issues in last year... but considering the patch is in Abandoned state, moving back the bug to NEW. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 11:02:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:02:27 +0000 Subject: [Bugs] [Bug 1535079] syntax error in S10selinux-label-brick.sh In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1535079 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 11:02:27 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:04:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:04:06 +0000 Subject: [Bugs] [Bug 1538900] Found a missing unref in rpc_clnt_reconnect In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1538900 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |EasyFix Priority|unspecified |medium Status|POST |NEW CC| |atumball at redhat.com Severity|unspecified |medium -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat May 11 11:05:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:05:26 +0000 Subject: [Bugs] [Bug 1541438] quorum-reads option can give inconsistent reads In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1541438 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|POST |NEW CC| |atumball at redhat.com Severity|unspecified |medium -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:06:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:06:17 +0000 Subject: [Bugs] [Bug 1546649] DHT: Readdir of directory which contain directory entries is slow In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1546649 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Performance Status|POST |NEW CC| |atumball at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:07:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:07:11 +0000 Subject: [Bugs] [Bug 1559787] even if 'glupy' is disabled, the build considers it. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1559787 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|POST |CLOSED Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-11 11:07:11 --- Comment #4 from Amar Tumballi --- No more glupy in the build (glusterfs-6.x) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 11:08:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:08:20 +0000 Subject: [Bugs] [Bug 1563086] Provide a script that can be run to wait until the bricks come-online on startup. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1563086 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|POST |NEW CC| |atumball at redhat.com Severity|unspecified |low -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 11:09:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:09:37 +0000 Subject: [Bugs] [Bug 1098991] Dist-geo-rep: Invalid slave url (::: three or more colons) error out with unclear error message. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1098991 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-11 11:09:37 --- Comment #7 from Worker Ant --- REVIEW: https://review.gluster.org/22605 (cli: Validate invalid slave url) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. 
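Bug 1563086 above asks for a script that waits until bricks come online on startup. A rough sketch of such a poll loop, assuming the default tabular output of "gluster volume status" where each Brick row ends with Online (Y/N) and Pid columns; the volume name, timeout and column positions are assumptions, not an official script:

    #!/bin/bash
    # Poll until every brick of VOL reports Online = Y, or give up after TIMEOUT seconds.
    VOL=${1:-myvol}
    TIMEOUT=${2:-120}
    end=$(( $(date +%s) + TIMEOUT ))
    while [ "$(date +%s)" -lt "$end" ]; do
        # Count brick rows that are not yet online; the awk pattern assumes the
        # default layout of 'gluster volume status' (Online is the next-to-last column).
        offline=$(gluster volume status "$VOL" 2>/dev/null \
                  | awk '/^Brick/ { if ($(NF-1) != "Y") n++ } END { print n+0 }')
        if [ "$offline" -eq 0 ]; then
            echo "All bricks of $VOL are online."
            exit 0
        fi
        sleep 2
    done
    echo "Timed out waiting for bricks of $VOL to come online." >&2
    exit 1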
From bugzilla at redhat.com Thu May 9 04:59:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 09 May 2019 04:59:53 +0000 Subject: [Bugs] [Bug 1708051] Capture memory consumption for gluster process at the time of throwing no memory available message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708051 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-11 11:10:33 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22688 (core: Capture process memory usage at the time of call gf_msg_nomem) merged (#6) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:13:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:13:50 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #657 from Worker Ant --- REVIEW: https://review.gluster.org/22700 (libglusterfs: Remove decompunder helper routines from symbol export) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 11:19:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:19:28 +0000 Subject: [Bugs] [Bug 1568674] [shared-storage-vol]: Dead lock between disable and enable of shared volume blaming itself In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1568674 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|POST |NEW CC| |atumball at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:21:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:21:30 +0000 Subject: [Bugs] [Bug 1576192] [Brick-Mux] Brick process can be crash at the time of call xlator cbks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1576192 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|POST |NEW CC| |atumball at redhat.com Severity|unspecified |low --- Comment #2 from Amar Tumballi --- Patch is in Abandoned state, and hence its good to mark bug as NEW again. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:23:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:23:11 +0000 Subject: [Bugs] [Bug 1578405] EIO errors when updating and deleting entries concurrently In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1578405 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|POST |ASSIGNED CC| |atumball at redhat.com Severity|unspecified |medium -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Sat May 11 11:24:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:24:19 +0000 Subject: [Bugs] [Bug 1589695] Provide a cli cmd to modify max-file-size In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1589695 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |NEW CC| |atumball at redhat.com --- Comment #3 from Amar Tumballi --- Patch is Abandoned... hence moving it back to NEW. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:25:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:25:10 +0000 Subject: [Bugs] [Bug 1589705] quick-read: separate performance.cache-size tunable to affect quick-read only In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1589705 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|POST |NEW CC| |atumball at redhat.com Severity|unspecified |medium -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 11:27:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:27:38 +0000 Subject: [Bugs] [Bug 1593078] SAS library corruption on GlusterFS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593078 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |NEW CC| |atumball at redhat.com --- Comment #5 from Amar Tumballi --- Above patch is merged, but that seems to refer to different issue. Hence moving it back to NEW. Please update with proper status if worked on. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:30:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:30:33 +0000 Subject: [Bugs] [Bug 1084508] read-ahead not working if open-behind is turned on In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1084508 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 11:30:33 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:30:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:30:34 +0000 Subject: [Bugs] [Bug 1393419] read-ahead not working if open-behind is turned on In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1393419 Bug 1393419 depends on bug 1084508, which changed state. Bug 1084508 Summary: read-ahead not working if open-behind is turned on https://bugzilla.redhat.com/show_bug.cgi?id=1084508 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. 
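For context on the request in bug 1589705 above: today quick-read shares a single cache-size knob with io-cache, which is exactly why a quick-read-only tunable is being asked for. A minimal sketch of the existing controls (volume name is a placeholder; the proposed separate option does not exist yet):

    gluster volume set myvol performance.cache-size 128MB   # shared today between io-cache and quick-read
    gluster volume set myvol performance.quick-read off     # quick-read can only be toggled per volume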
From bugzilla at redhat.com Sat May 11 11:31:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:31:47 +0000 Subject: [Bugs] [Bug 1105277] Failure to execute gverify.sh. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1105277 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Status|MODIFIED |NEW CC| |atumball at redhat.com Severity|urgent |high -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:33:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:33:00 +0000 Subject: [Bugs] [Bug 1162119] DHT + rebalance :- file permission got changed (sticky bit and setgid is removed) after file migration In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1162119 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 11:33:00 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:34:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:34:00 +0000 Subject: [Bugs] [Bug 1207146] BitRot:- bitd crashed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1207146 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-11 11:34:00 --- Comment #3 from Amar Tumballi --- Not seen this in a long time now. -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Sat May 11 11:34:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:34:48 +0000 Subject: [Bugs] [Bug 1208124] BitRot :- checksum value stored in xattr is different than actual value for some file (checksum is truncated if it has terminating character as part of checksum itself) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1208124 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version|glusterfs-3.7dev-0.994 |glusterfs-3.8.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 11:34:48 -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. 
From bugzilla at redhat.com Sat May 11 11:35:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:35:53 +0000 Subject: [Bugs] [Bug 1210696] BitRot :- bitrot logs don't have msg-id in logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1210696 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 11:35:53 -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Sat May 11 11:37:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:37:05 +0000 Subject: [Bugs] [Bug 1215120] Bitrot file crawling is too slow In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1215120 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-4.1.1 Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 11:37:05 -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Sat May 11 11:38:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:38:06 +0000 Subject: [Bugs] [Bug 1239156] Glusterd crashed while glusterd service was shutting down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1239156 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Severity|unspecified |medium Last Closed| |2019-05-11 11:38:06 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:38:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:38:52 +0000 Subject: [Bugs] [Bug 1411598] Remove own-thread option entirely for SSL and use epoll event infrastructure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1411598 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 11:38:52 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:39:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:39:45 +0000 Subject: [Bugs] [Bug 1467510] protocol/server: Reject the connection if the graph is not ready In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1467510 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 11:39:45 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Sat May 11 11:41:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:41:42 +0000 Subject: [Bugs] [Bug 1480516] Gluster Bricks are not coming up after pod restart when bmux is ON In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1480516 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-11 11:41:42 --- Comment #3 from Amar Tumballi --- As per Comment#2 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:42:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:42:42 +0000 Subject: [Bugs] [Bug 1590385] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version|glusterfs-6.0 |glusterfs-7.0 Resolution|--- |NEXTRELEASE Last Closed|2019-03-25 16:30:27 |2019-05-11 11:42:42 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:42:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:42:43 +0000 Subject: [Bugs] [Bug 1707393] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707393 Bug 1707393 depends on bug 1590385, which changed state. Bug 1590385 Summary: Refactor dht lookup code https://bugzilla.redhat.com/show_bug.cgi?id=1590385 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 11:44:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:44:36 +0000 Subject: [Bugs] [Bug 1615385] glusterd segfault - memcpy () at /usr/include/bits/string3.h:51 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1615385 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 11:44:36 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:45:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:45:16 +0000 Subject: [Bugs] [Bug 1623317] posix_mknod does not update trusted.pgfid.xx xattr correctly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1623317 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-4.1.10 Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 11:45:16 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Sat May 11 11:45:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:45:17 +0000 Subject: [Bugs] [Bug 1620765] posix_mknod does not update trusted.pgfid.xx xattr correctly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1620765 Bug 1620765 depends on bug 1623317, which changed state. Bug 1623317 Summary: posix_mknod does not update trusted.pgfid.xx xattr correctly https://bugzilla.redhat.com/show_bug.cgi?id=1623317 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:46:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:46:06 +0000 Subject: [Bugs] [Bug 1649895] GlusterFS 5.2 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1649895 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-5.2 Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-11 11:46:06 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:46:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:46:52 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-7.0 Resolution|--- |NEXTRELEASE Last Closed| |2019-05-11 11:46:52 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:46:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:46:53 +0000 Subject: [Bugs] [Bug 1693155] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693155 Bug 1693155 depends on bug 1689250, which changed state. Bug 1689250 Summary: Excessive AFR messages from gluster showing in RHGSWA. https://bugzilla.redhat.com/show_bug.cgi?id=1689250 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 11:47:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:47:37 +0000 Subject: [Bugs] [Bug 1702299] Custom xattrs are not healed on newly added brick In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702299 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-7.0 Resolution|--- |NEXTRELEASE Last Closed| |2019-05-11 11:47:37 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Sat May 11 11:50:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:50:50 +0000 Subject: [Bugs] [Bug 1664215] Toggling readdir-ahead translator off causes some clients to umount some of its volumes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1664215 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|varao at redhat.com |spamecha at redhat.com --- Comment #5 from Amar Tumballi --- Amgad, we recommend to you to upgrade to glusterfs-6.1 (or 6.2 which comes out in another week), so you can avail some of the latest fixes. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 11:54:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 11:54:07 +0000 Subject: [Bugs] [Bug 1419950] To generate the FOPs in io-stats xlator using a code-gen framework In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1419950 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Severity|unspecified |low Last Closed| |2019-05-11 11:54:07 --- Comment #3 from Amar Tumballi --- Not a priority as we already have the code, and this is just a way to regenerate the code. Will revisit it later. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 12:54:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 12:54:58 +0000 Subject: [Bugs] [Bug 1707393] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707393 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-11 12:54:58 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22693 (cluster/dht: Refactor dht lookup functions) merged (#1) on release-6 by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 13:05:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 13:05:51 +0000 Subject: [Bugs] [Bug 1393419] read-ahead not working if open-behind is turned on In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1393419 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE Flags|needinfo?(rgowdapp at redhat.c | |om) | Last Closed| |2019-05-11 13:05:51 --- Comment #22 from Raghavendra G --- This is fixed by https://review.gluster.org/r/Ifa52d8ff017f115e83247f3396b9d27f0295ce3f -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Sat May 11 13:19:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 13:19:19 +0000 Subject: [Bugs] [Bug 1335373] cyclic dentry loop in inode table In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335373 --- Comment #11 from Raghavendra G --- Stale dentries in the client itable can end up making both src and dst exist after a "mv src dst" succeeds, with both resolving to the same inode. From one of the production setups using SAS: md-cache doesn't resolve a (parent, basename) pair to an inode. Instead it's the access layers (fuse, gfapi) that resolve the path to an inode; md-cache just gives back the stat stored in the context of the inode. Which means both, * (Nodeid:139842860710128, dm_errori.sas7bdat) * (Nodeid:139842860710128, dm_errori.sas7bdat.lck) resolve to the same inode. Since they resolve to the same inode, the lookup is served from the same cache and hence returns identical stats. When md-cache is turned off, both lookups still go to the same inode, but the server-resolver on the brick ignores the gfid sent by the client. Instead it resolves the entry freshly and hence gets a different inode for each lookup. So the actual problem is in the fuse inode table, where there are two dentries for the same inode. How did we end up in that situation? It's likely that a lookup (Nodeid:139842860710128, dm_errori.sas7bdat.lck) was racing with a rename (dm_errori.sas7bdat, dm_errori.sas7bdat.lck) and the rename updated the inode table on the client first. The lookup, which hit storage/posix before the rename, relinked the stale dentry. Without lookups reaching the bricks, the client never got a chance to flush its stale dentry. The problem is exactly the one SDFS is trying to solve on the brick stack. So, the role of stat-prefetch here is that it prevents lookups from reaching the bricks. Once lookups reach the bricks, the stale dentry is purged from the inode table and the error condition goes away. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 13:28:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 13:28:17 +0000 Subject: [Bugs] [Bug 1335373] cyclic dentry loop in inode table In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335373 --- Comment #12 from Raghavendra G --- (In reply to Amar Tumballi from comment #10) > Raghavendra, Yes, SDFS has performance impact. But the feature is still > present and can be enabled if user's work load demands it. > > My reasoning of closing this bug was mainly because for none of our users, > this particular case was not hit (ie, there are no reports of it). So, > focusing on that work when no one needs it is meaningless, that too when > there are options available. The second claim about no user has hit it is wrong. To repost the link I posted in comment #8 - https://bugzilla.redhat.com/show_bug.cgi?id=1696353#c11. Also I've found this scenario on SAS setups too - https://bugzilla.redhat.com/show_bug.cgi?id=1581306#c47 I've already reasoned about the non-feasibility of sdfs: 1. It's not available on the client. There is no option which can load it in client graphs. It can be loaded only on bricks. So, even if the user can accept the perf impact (which is highly unlikely, see point 2), the solution is not complete. 2. The commit I posted in comment #6 measured the impact of sdfs being loaded on the brick stack. We don't have any perf data to measure the impact if it gets loaded on the client graph.
It's highly likely that the perf impact is much greater than the numbers posted in comment #6. All it takes to hit this bug is a rename-heavy workload, and there are many of them, as the following paradigm is pretty common while copying a file (rsync, postgresql, SAS etc. to give specific examples): * create a tmp file * write to it * rename the tmp file to the actual file -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 13:40:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 13:40:09 +0000 Subject: [Bugs] [Bug 1335373] cyclic dentry loop in inode table In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335373 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |khiremat at redhat.com Flags| |needinfo?(khiremat at redhat.c | |om) --- Comment #13 from Raghavendra G --- (In reply to Raghavendra G from comment #12) > (In reply to Amar Tumballi from comment #10) > > Raghavendra, Yes, SDFS has performance impact. But the feature is still > > present and can be enabled if user's work load demands it. > > > > My reasoning of closing this bug was mainly because for none of our users, > > this particular case was not hit (ie, there are no reports of it). So, > > focusing on that work when no one needs it is meaningless, that too when > > there are options available. > > The second claim about no user has hit it is wrong. To repost the link I > posted in comment #8 - > https://bugzilla.redhat.com/show_bug.cgi?id=1696353#c11. Also I've found > this scenario on SAS setups too - > https://bugzilla.redhat.com/show_bug.cgi?id=1581306#c47 Also, https://bugzilla.redhat.com/show_bug.cgi?id=1600923 on geo-rep setups. IIRC, this bug, when hit, affects geo-rep. Putting needinfo on Kotresh and Rochelle to explain how this bug affects geo-rep. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 13:40:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 13:40:35 +0000 Subject: [Bugs] [Bug 1335373] cyclic dentry loop in inode table In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335373 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |rallan at redhat.com Flags| |needinfo?(rallan at redhat.com | |) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 11 14:17:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 14:17:40 +0000 Subject: [Bugs] [Bug 1546649] DHT: Readdir of directory which contain directory entries is slow In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1546649 --- Comment #2 from Raghavendra G --- A WIP proposal can be found at: https://github.com/gluster/glusterfs/issues/611 Note that the discussion is still in a preliminary stage and it's not yet evident that this can be a valid solution. But I plan to spend some more time on this to drive it to a logical conclusion on whether it can be a viable solution or not. -- You are receiving this mail because: You are on the CC list for the bug.
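A minimal shell sketch of the create-tmp/write/rename copy paradigm listed in comment #12 above; the file names are placeholders, and the final rename is the step that can race with an in-flight lookup and leave a stale dentry on the client as described in comments #11 and #12:

    SRC=source.dat                 # placeholder source file
    DST=target.dat                 # placeholder destination
    TMP=$(mktemp "$DST.XXXXXX")    # temp file beside the destination, on the same volume
    cat "$SRC" > "$TMP"            # write the full contents to the temp file first
    mv "$TMP" "$DST"               # atomic rename into place; never exposes a half-written DST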
From bugzilla at redhat.com Sat May 11 14:26:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 14:26:31 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #658 from Worker Ant --- REVIEW: https://review.gluster.org/22633 (rpc: implement reconnect back-off strategy) merged (#4) on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 17:57:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 17:57:17 +0000 Subject: [Bugs] [Bug 1708926] New: Invalid memory access while executing cleanup_nad_exit Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708926 Bug ID: 1708926 Summary: Invalid memory access while executing cleanup_nad_exit Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: rkavunga at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: when executing a cleanup_and_exit, a shd daemon is crashed. This is because there is a chance that a parallel graph free thread might be executing another cleanup Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. run ./tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t in a loop 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 17:59:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 17:59:30 +0000 Subject: [Bugs] [Bug 1708926] Invalid memory access while executing cleanup_nad_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708926 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22709 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 17:59:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 17:59:31 +0000 Subject: [Bugs] [Bug 1708926] Invalid memory access while executing cleanup_nad_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708926 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22709 (glusterfsd/cleanup: Protect graph object under a lock) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
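For the reproducer in bug 1708926 above ("run ./tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t in a loop"), a rough sketch of scripting that from a glusterfs source tree, assuming the usual way of invoking a single regression test with prove; the iteration count is arbitrary:

    #!/bin/bash
    # Run the reset-brick test repeatedly until it fails or 50 iterations pass.
    T=tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t
    for i in $(seq 1 50); do
        echo "=== iteration $i ==="
        prove -vf "$T" || { echo "failed on iteration $i"; exit 1; }
    done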
From bugzilla at redhat.com Sat May 11 18:02:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 18:02:14 +0000 Subject: [Bugs] [Bug 1708929] New: Add more test coverage for shd mux Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708929 Bug ID: 1708929 Summary: Add more test coverage for shd mux Product: GlusterFS Version: mainline Status: NEW Component: tests Assignee: bugs at gluster.org Reporter: rkavunga at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: We need to add more test coverage for shd. This Bugzilla can be used to track the test coverage for shd mux Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 18:08:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 18:08:46 +0000 Subject: [Bugs] [Bug 1708929] Add more test coverage for shd mux In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22697 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 11 18:08:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 11 May 2019 18:08:47 +0000 Subject: [Bugs] [Bug 1708929] Add more test coverage for shd mux In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22697 (tests/shd: Add test coverage for shd mux) posted (#5) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun May 12 04:29:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 12 May 2019 04:29:13 +0000 Subject: [Bugs] [Bug 1707081] Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707081 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords|AutomationBlocker, | |Regression, TestBlocker | Blocks|1696807 |1704851 Depends On|1704851 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1704851 [Bug 1704851] Self heal daemon not coming up after upgrade to glusterfs-6.0-2 (intermittently) on a brick mux setup -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sun May 12 14:33:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 12 May 2019 14:33:11 +0000 Subject: [Bugs] [Bug 1708116] geo-rep: Sync hangs with tarssh as sync-engine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708116 Rahul Hinduja changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1696807 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 03:15:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 03:15:39 +0000 Subject: [Bugs] [Bug 1709087] New: Capture memory consumption for gluster process at the time of throwing no memory available message Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709087 Bug ID: 1709087 Summary: Capture memory consumption for gluster process at the time of throwing no memory available message Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: core Assignee: atumball at redhat.com Reporter: moagrawa at redhat.com QA Contact: rhinduja at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1708051 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1708051 +++ Description of problem: Capture current memory usage of gluster process at the time of throwing no memory available message Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2019-05-09 04:59:53 UTC --- REVIEW: https://review.gluster.org/22688 (core: Capture process memory usage at the time of call gf_msg_nomem) posted (#2) for review on master by MOHIT AGRAWAL --- Additional comment from Worker Ant on 2019-05-11 11:10:33 UTC --- REVIEW: https://review.gluster.org/22688 (core: Capture process memory usage at the time of call gf_msg_nomem) merged (#6) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1708051 [Bug 1708051] Capture memory consumption for gluster process at the time of throwing no memory available message -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 03:15:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 03:15:39 +0000 Subject: [Bugs] [Bug 1708051] Capture memory consumption for gluster process at the time of throwing no memory available message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708051 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1709087 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709087 [Bug 1709087] Capture memory consumption for gluster process at the time of throwing no memory available message -- You are receiving this mail because: You are on the CC list for the bug. 
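The patch tracked in bugs 1709087/1708051 above captures gluster process memory usage at the point gf_msg_nomem is emitted. That change lives in the C core, but the underlying idea can be illustrated with a small sketch that snapshots a process's own footprint from /proc/self/status when an allocation failure is reported. The function names and reporting format below are hypothetical; only the VmSize/VmRSS/VmData field names come from the standard Linux /proc format.

def memory_snapshot() -> dict:
    # read selected VM* fields for the current process (Linux only)
    fields = {}
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:", "VmData:")):
                key, value = line.split(":", 1)
                fields[key] = value.strip()
    return fields

def report_no_memory(context: str) -> None:
    # hypothetical stand-in for a "no memory available" message that now
    # also carries the current memory usage of the process
    print(f"no memory available in {context}; usage: {memory_snapshot()}")

if __name__ == "__main__":
    report_no_memory("example-allocation-site")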
From bugzilla at redhat.com Mon May 13 03:15:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 03:15:42 +0000 Subject: [Bugs] [Bug 1709087] Capture memory consumption for gluster process at the time of throwing no memory available message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709087 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 03:17:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 03:17:02 +0000 Subject: [Bugs] [Bug 1708531] gluster rebalance status brain splits In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708531 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Group|redhat | Version|unspecified |mainline Component|distribute |distribute Assignee|spalai at redhat.com |bugs at gluster.org QA Contact|tdesala at redhat.com | Product|Red Hat Gluster Storage |GlusterFS --- Comment #8 from Nithya Balachandran --- The version numbers do not match the downstream RHBZ builds. Moving this to the Community release. -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 03:17:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 03:17:09 +0000 Subject: [Bugs] [Bug 1709087] Capture memory consumption for gluster process at the time of throwing no memory available message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709087 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- CC|bugs at gluster.org | Assignee|atumball at redhat.com |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 03:59:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 03:59:13 +0000 Subject: [Bugs] [Bug 1708531] gluster rebalance status brain splits In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708531 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(amukherj at redhat.c | |om) --- Comment #9 from Nithya Balachandran --- (In reply to Qigang from comment #6) > Yes, the rebalance process is still running, and it has been making very > slow progress for almost a week. It looks like it is not migrating files. It > is just doing fix-layout. We have over 110TB files (and many of them are > small files) in our gluster storage. Do you have a lot of directories? If yes, then fixing the layout on those will take a lot of time but do not show up in the status. The problem with the cli commands is probably because of a mismatch in the glusterd node info files. Asking Atin to provide the steps to work around this. -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 13 04:36:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 04:36:59 +0000 Subject: [Bugs] [Bug 1708531] gluster rebalance status brain splits In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708531 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(wangqg1 at lenovo.co | |m) --- Comment #10 from Nithya Balachandran --- If you do not have lookup-optimize enabled on the volume, you can kill the rebalance processes, then perform the steps Atin will provide to clean up the node_state.info files. -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 04:48:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 04:48:55 +0000 Subject: [Bugs] [Bug 1707866] Thousands of duplicate files in glusterfs mountpoint directory listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707866 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |nbalacha at redhat.com Flags| |needinfo?(sergemp at mail.ru) --- Comment #1 from Nithya Balachandran --- (In reply to Sergey from comment #0) > I have something impossible: same filenames are listed multiple times: Based on the information provided for zabbix.pm, the files are listed twice because 2 separate copies of the files exist on different bricks. > > # ls -la /mnt/VOLNAME/ > ... > -rwxrwxr-x 1 root root 3486 Jan 28 2016 check_connections.pl > -rwxr-xr-x 1 root root 153 Dec 7 2014 sigtest.sh > -rwxr-xr-x 1 root root 153 Dec 7 2014 sigtest.sh > -rwxr-xr-x 1 root root 3466 Jan 5 2015 zabbix.pm > -rwxr-xr-x 1 root root 3466 Jan 5 2015 zabbix.pm > > There're about 38981 duplicate files like that. > > The volume itself is a 3 x 2-replica: > > # gluster volume info VOLNAME > Volume Name: VOLNAME > Type: Distributed-Replicate > Volume ID: 41f9096f-0d5f-4ea9-b369-89294cf1be99 > Status: Started > Snapshot Count: 0 > Number of Bricks: 3 x 2 = 6 > Transport-type: tcp > Bricks: > Brick1: gfserver1:/srv/BRICK > Brick2: gfserver2:/srv/BRICK > Brick3: gfserver3:/srv/BRICK > Brick4: gfserver4:/srv/BRICK > Brick5: gfserver5:/srv/BRICK > Brick6: gfserver6:/srv/BRICK > Options Reconfigured: > transport.address-family: inet > nfs.disable: on > cluster.self-heal-daemon: enable > config.transport: tcp > > The "duplicated" file on individual bricks: > > [gfserver1]# ls -la /srv/BRICK/zabbix.pm > ---------T 2 root root 0 Apr 23 2018 /srv/BRICK/zabbix.pm > > [gfserver2]# ls -la /srv/BRICK/zabbix.pm > ---------T 2 root root 0 Apr 23 2018 /srv/BRICK/zabbix.pm > These 2 are linkto files and they are pointing to the data files on gfserver3 and gfserver4. > [gfserver3]# ls -la /srv/BRICK/zabbix.pm > -rwxr-xr-x 2 root root 3466 Jan 5 2015 /srv/BRICK/zabbix.pm > > [gfserver4]# ls -la /srv/BRICK/zabbix.pm > -rwxr-xr-x 2 root root 3466 Jan 5 2015 /srv/BRICK/zabbix.pm > > [gfserver5]# ls -la /srv/BRICK/zabbix.pm > -rwxr-xr-x 2 root root 3466 Jan 5 2015 /srv/BRICK/zabbix.pm > > [gfserver6]# ls -la /srv/BRICK/zabbix.pm > -rwxr-xr-x. 2 root root 3466 Jan 5 2015 /srv/BRICK/zabbix.pm > These are the problematic files. I do not know why or how they ended up existing on these bricks as well. > Attributes: > > [gfserver1]# getfattr -m . 
-d -e hex /srv/BRICK/zabbix.pm > # file: srv/BRICK/zabbix.pm > trusted.afr.VOLNAME-client-1=0x000000000000000000000000 > trusted.afr.VOLNAME-client-4=0x000000000000000000000000 > trusted.gfid=0x422a7ccf018242b58e162a65266326c3 > trusted.glusterfs.dht.linkto=0x6678666565642d7265706c69636174652d3100 > > [gfserver2]# getfattr -m . -d -e hex /srv/BRICK/zabbix.pm > # file: srv/BRICK/zabbix.pm > trusted.gfid=0x422a7ccf018242b58e162a65266326c3 > > trusted.gfid2path. > 3b27d24cad4dceef=0x30303030303030302d303030302d303030302d303030302d3030303030 > 303030303030312f7a61626269782e706d > trusted.glusterfs.dht.linkto=0x6678666565642d7265706c69636174652d3100 > > [gfserver3]# getfattr -m . -d -e hex /srv/BRICK/zabbix.pm > # file: srv/BRICK/zabbix.pm > trusted.afr.VOLNAME-client-2=0x000000000000000000000000 > trusted.afr.VOLNAME-client-3=0x000000000000000000000000 > trusted.gfid=0x422a7ccf018242b58e162a65266326c3 > > [gfserver4]# getfattr -m . -d -e hex /srv/BRICK/zabbix.pm > # file: srv/BRICK/zabbix.pm > trusted.gfid=0x422a7ccf018242b58e162a65266326c3 > > trusted.gfid2path. > 3b27d24cad4dceef=0x30303030303030302d303030302d303030302d303030302d3030303030 > 303030303030312f7a61626269782e706d > > [gfserver5]# getfattr -m . -d -e hex /srv/BRICK/zabbix.pm > # file: srv/BRICK/zabbix.pm > trusted.bit-rot.version=0x03000000000000005c4f813c000bc71b > trusted.gfid=0x422a7ccf018242b58e162a65266326c3 > > [gfserver6]# getfattr -m . -d -e hex /srv/BRICK/zabbix.pm > # file: srv/BRICK/zabbix.pm > security.selinux=0x73797374656d5f753a6f626a6563745f723a7661725f743a733000 > trusted.bit-rot.version=0x02000000000000005add0ffc000eb66a > trusted.gfid=0x422a7ccf018242b58e162a65266326c3 > > Not sure why exactly it happened... Maybe because some nodes were suddenly > upgraded from centos6's gluster ~3.7 to centos7's 4.1, and some files > happened to be on nodes that they're not supposed to be on. > > Currently all the nodes are online: > > # gluster pool list > UUID Hostname State > aac9e1a5-018f-4d27-9d77-804f0f1b2f13 gfserver5 Connected > 98b22070-b579-4a91-86e3-482cfcc9c8cf gfserver3 Connected > 7a9841a1-c63c-49f2-8d6d-a90ae2ff4e04 gfserver4 Connected > 955f5551-8b42-476c-9eaa-feab35b71041 gfserver6 Connected > 7343d655-3527-4bcf-9d13-55386ccb5f9c gfserver1 Connected > f9c79a56-830d-4056-b437-a669a1942626 gfserver2 Connected > 45a72ab3-b91e-4076-9cf2-687669647217 localhost Connected > > and have glusterfs-3.12.14-1.el6.x86_64 (Centos 6) and > glusterfs-4.1.7-1.el7.x86_64 (Centos 7) installed. > > > Expected result > --------------- > > This looks like a layout issue, so: > > gluster volume rebalance VOLNAME fix-layout start > > should fix it, right? > No, fix layout only changes the layout and this is not a layout problem. This is a problem with duplicate files on the bricks. > > Actual result > ------------- > > I tried: > gluster volume rebalance VOLNAME fix-layout start > gluster volume rebalance VOLNAME start > gluster volume rebalance VOLNAME start force > gluster volume heal VOLNAME full > Those took 5 to 40 minutes to complete, but the duplicates are still there. Can you send the rebalance logs for this volume from all the nodes? How many clients to do you have accessing the volume? Are the duplicate files seen only in the root of the volume or in subdirs as well? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
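The xattr values quoted in the comment above (comment #1 on bug 1707866) are hex-encoded strings from getfattr -e hex, and decoding them shows what the zero-byte files on gfserver1/gfserver2 actually are: trusted.glusterfs.dht.linkto names the replicate subvolume that holds the real data (here the "-replicate-1" pair, which is consistent with the statement that the data files live on gfserver3/gfserver4), and trusted.gfid2path.* records "<parent-gfid>/<basename>". A minimal decoding sketch, with the hex values copied verbatim from the comment:

# value of trusted.glusterfs.dht.linkto on gfserver1/gfserver2
linkto_hex = "6678666565642d7265706c69636174652d3100"
# value of trusted.gfid2path.3b27d24cad4dceef on gfserver2
gfid2path_hex = (
    "30303030303030302d303030302d303030302d303030302d"
    "3030303030303030303030312f7a61626269782e706d"
)

# the linkto value is the target subvolume name with a trailing NUL byte
print(bytes.fromhex(linkto_hex).rstrip(b"\x00").decode())

# gfid2path decodes to "<parent gfid>/<basename>", here the root gfid
# followed by zabbix.pm
print(bytes.fromhex(gfid2path_hex).decode())

The same decoding can be applied to any other getfattr -e hex output gathered while investigating the duplicate entries.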
From bugzilla at redhat.com Mon May 13 05:14:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:14:46 +0000 Subject: [Bugs] [Bug 1708531] gluster rebalance status brain splits In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708531 Qigang changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(amukherj at redhat.c | |om) | |needinfo?(wangqg1 at lenovo.co | |m) | --- Comment #11 from Qigang --- Yes, we have a lot of directories. The rebalance log file /var/log/glusterfs/gv0-rebalance.log can give scanned folder information and thus can be viewed as a status report. But it is way too slow and there isn't a progress bar. We have no idea how long it will take. ----one item in rebalance.log file---- [2019-05-13 05:09:10.236068] I [MSGID: 109081] [dht-common.c:4379:dht_setxattr] 0-gv0-dht: fixing the layout of /yangdk2_data/data/meitu/meitu_img/train/gameplaying/954742707 ----one item in rebalance.log file---- The rebalance process is only observed in the two newly added pairs. Our lookup-optimize setting is off. Thank you very much. -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:15:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:15:18 +0000 Subject: [Bugs] [Bug 1709130] New: thin-arbiter lock release fixes Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709130 Bug ID: 1709130 Summary: thin-arbiter lock release fixes Product: GlusterFS Version: 6 Status: NEW Component: replicate Keywords: Triaged Assignee: bugs at gluster.org Reporter: ravishankar at redhat.com CC: bugs at gluster.org Depends On: 1698449 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1698449 +++ Description of problem: Addresses post-merge review comments for https://review.gluster.org/#/c/glusterfs/+/20095/ --- Additional comment from Worker Ant on 2019-04-10 11:53:45 UTC --- REVIEW: https://review.gluster.org/22543 (afr: thin-arbiter lock release fixes) posted (#1) for review on master by Ravishankar N --- Additional comment from Worker Ant on 2019-05-10 12:11:56 UTC --- REVIEW: https://review.gluster.org/22543 (afr: thin-arbiter lock release fixes) merged (#8) on master by Pranith Kumar Karampuri Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1698449 [Bug 1698449] thin-arbiter lock release fixes -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:15:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:15:18 +0000 Subject: [Bugs] [Bug 1698449] thin-arbiter lock release fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698449 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1709130 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709130 [Bug 1709130] thin-arbiter lock release fixes -- You are receiving this mail because: You are on the CC list for the bug. 
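Going back to the rebalance discussion in bug 1708531 earlier in this message: since there is no progress bar and the fix-layout entries in gv0-rebalance.log are the only visible sign of progress, a rough way to watch it is to count those entries. The log path and the "fixing the layout of" message text are taken from the comment; everything else below is a hypothetical helper, and the count is only an approximation because the total number of directories is unknown.

LOG = "/var/log/glusterfs/gv0-rebalance.log"

def layouts_fixed(path: str = LOG) -> int:
    # count directories whose layout the fix-layout crawl has already logged
    count = 0
    with open(path, errors="replace") as f:
        for line in f:
            if "fixing the layout of" in line:
                count += 1
    return count

if __name__ == "__main__":
    print(f"directories with layout fixed so far: {layouts_fixed()}")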
From bugzilla at redhat.com Mon May 13 05:16:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:16:03 +0000 Subject: [Bugs] [Bug 1709130] thin-arbiter lock release fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709130 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |ravishankar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:36:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:36:14 +0000 Subject: [Bugs] [Bug 1709143] New: [Thin-arbiter] : send correct error code in case of failure Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709143 Bug ID: 1709143 Summary: [Thin-arbiter] : send correct error code in case of failure Product: GlusterFS Version: 6 Status: NEW Component: replicate Assignee: bugs at gluster.org Reporter: aspandey at redhat.com CC: bugs at gluster.org Depends On: 1686711 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1686711 +++ Description of problem: Handle error code properly. https://review.gluster.org/#/c/glusterfs/+/21933/6/xlators/cluster/afr/src/afr-transaction.c at 1306 Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2019-03-08 06:13:44 UTC --- REVIEW: https://review.gluster.org/22327 (cluster/afr : TA: Return actual error code in case of failure) posted (#1) for review on master by Ashish Pandey --- Additional comment from Worker Ant on 2019-03-14 12:11:57 UTC --- REVIEW: https://review.gluster.org/22327 (cluster/afr : TA: Return actual error code in case of failure) merged (#3) on master by Pranith Kumar Karampuri Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686711 [Bug 1686711] [Thin-arbiter] : send correct error code in case of failure -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:36:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:36:14 +0000 Subject: [Bugs] [Bug 1686711] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686711 Ashish Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1709143 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709143 [Bug 1709143] [Thin-arbiter] : send correct error code in case of failure -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 13 05:36:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:36:15 +0000 Subject: [Bugs] [Bug 1686711] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686711 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22711 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:36:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:36:16 +0000 Subject: [Bugs] [Bug 1686711] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686711 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22711 (cluster/afr : TA: Return actual error code in case of failure) posted (#1) for review on release-6 by Ashish Pandey -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:36:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:36:35 +0000 Subject: [Bugs] [Bug 1709145] New: [Thin-arbiter] : send correct error code in case of failure Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709145 Bug ID: 1709145 Summary: [Thin-arbiter] : send correct error code in case of failure Product: GlusterFS Version: 6 Status: NEW Component: replicate Assignee: bugs at gluster.org Reporter: ravishankar at redhat.com CC: aspandey at redhat.com, bugs at gluster.org Depends On: 1686711 Blocks: 1709143 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1686711 +++ Description of problem: Handle error code properly. https://review.gluster.org/#/c/glusterfs/+/21933/6/xlators/cluster/afr/src/afr-transaction.c at 1306 Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2019-03-08 06:13:44 UTC --- REVIEW: https://review.gluster.org/22327 (cluster/afr : TA: Return actual error code in case of failure) posted (#1) for review on master by Ashish Pandey --- Additional comment from Worker Ant on 2019-03-14 12:11:57 UTC --- REVIEW: https://review.gluster.org/22327 (cluster/afr : TA: Return actual error code in case of failure) merged (#3) on master by Pranith Kumar Karampuri --- Additional comment from Worker Ant on 2019-05-13 05:36:16 UTC --- REVIEW: https://review.gluster.org/22711 (cluster/afr : TA: Return actual error code in case of failure) posted (#1) for review on release-6 by Ashish Pandey Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686711 [Bug 1686711] [Thin-arbiter] : send correct error code in case of failure https://bugzilla.redhat.com/show_bug.cgi?id=1709143 [Bug 1709143] [Thin-arbiter] : send correct error code in case of failure -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 13 05:36:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:36:35 +0000 Subject: [Bugs] [Bug 1686711] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686711 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1709145 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709145 [Bug 1709145] [Thin-arbiter] : send correct error code in case of failure -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:36:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:36:35 +0000 Subject: [Bugs] [Bug 1709143] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709143 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1709145 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709145 [Bug 1709145] [Thin-arbiter] : send correct error code in case of failure -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:36:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:36:57 +0000 Subject: [Bugs] [Bug 1709145] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709145 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Assignee|bugs at gluster.org |aspandey at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:37:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:37:30 +0000 Subject: [Bugs] [Bug 1709130] thin-arbiter lock release fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709130 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1709145 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709145 [Bug 1709145] [Thin-arbiter] : send correct error code in case of failure -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 05:37:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:37:30 +0000 Subject: [Bugs] [Bug 1709145] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709145 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1709130 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709130 [Bug 1709130] thin-arbiter lock release fixes -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon May 13 05:38:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:38:01 +0000 Subject: [Bugs] [Bug 1686711] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686711 --- Comment #4 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22711 (cluster/afr : TA: Return actual error code in case of failure) posted (#2) for review on release-6 by Ashish Pandey -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:38:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:38:02 +0000 Subject: [Bugs] [Bug 1686711] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686711 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22711 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:38:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:38:03 +0000 Subject: [Bugs] [Bug 1709143] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709143 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22711 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:38:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:38:04 +0000 Subject: [Bugs] [Bug 1709143] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709143 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22711 (cluster/afr : TA: Return actual error code in case of failure) posted (#2) for review on release-6 by Ashish Pandey -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:39:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:39:12 +0000 Subject: [Bugs] [Bug 1709130] thin-arbiter lock release fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709130 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1709145 |1709143 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709143 [Bug 1709143] [Thin-arbiter] : send correct error code in case of failure https://bugzilla.redhat.com/show_bug.cgi?id=1709145 [Bug 1709145] [Thin-arbiter] : send correct error code in case of failure -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon May 13 05:39:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:39:12 +0000 Subject: [Bugs] [Bug 1709143] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709143 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1709130 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709130 [Bug 1709130] thin-arbiter lock release fixes -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:39:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:39:12 +0000 Subject: [Bugs] [Bug 1709145] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709145 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1709130 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709130 [Bug 1709130] thin-arbiter lock release fixes -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 05:39:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:39:42 +0000 Subject: [Bugs] [Bug 1709145] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709145 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE Last Closed| |2019-05-13 05:39:42 --- Comment #1 from Ravishankar N --- *** This bug has been marked as a duplicate of bug 1709143 *** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 05:39:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:39:42 +0000 Subject: [Bugs] [Bug 1709143] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709143 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ravishankar at redhat.com --- Comment #2 from Ravishankar N --- *** Bug 1709145 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 05:39:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:39:43 +0000 Subject: [Bugs] [Bug 1709143] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709143 Bug 1709143 depends on bug 1709145, which changed state. Bug 1709145 Summary: [Thin-arbiter] : send correct error code in case of failure https://bugzilla.redhat.com/show_bug.cgi?id=1709145 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 13 05:41:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 05:41:42 +0000 Subject: [Bugs] [Bug 1563086] Provide a script that can be run to wait until the bricks come-online on startup. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1563086 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |VERIFIED --- Comment #2 from Pranith Kumar K --- https://github.com/gluster/gluster-block/blob/master/extras/wait-for-bricks.sh has the script. This was mostly done for gluster-block project so we got it merged in that project. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 06:17:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 06:17:43 +0000 Subject: [Bugs] [Bug 1335373] cyclic dentry loop in inode table In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1335373 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(khiremat at redhat.c |needinfo- |om) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 06:18:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 06:18:49 +0000 Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned properly after a ec fini In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703948 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-13 06:18:49 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22644 (ec/shd: Cleanup self heal daemon resources during ec fini) merged (#10) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 06:20:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 06:20:23 +0000 Subject: [Bugs] [Bug 1423442] group files to set volume options should have comments In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1423442 --- Comment #3 from Niels de Vos --- (In reply to Amar Tumballi from comment #2) > If we start considering json/yaml model for reading profile file. I do not know of a reason why we want to change the format completely. It would be sufficient to have lines starting with # as comments and ignore these while applying the key/values. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 06:56:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 06:56:38 +0000 Subject: [Bugs] [Bug 1706603] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706603 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1709174 -- You are receiving this mail because: You are the assignee for the bug. 
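On the group/profile file comment in bug 1423442 above: the suggestion is to keep the existing key=value format and simply ignore lines starting with # when applying the options. A minimal sketch of that parsing rule follows; the file contents shown are hypothetical examples, and today's group files do not support comments (that is what the bug asks for).

def parse_group_file(path: str) -> dict:
    # apply key=value lines, skipping blank lines and '#' comment lines
    options = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            options[key.strip()] = value.strip()
    return options

# hypothetical group file contents this would accept:
#   # tuned for small-file workloads
#   performance.readdir-ahead=on
#   cluster.lookup-optimize=on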
From bugzilla at redhat.com Mon May 13 09:26:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 09:26:49 +0000 Subject: [Bugs] [Bug 1644164] Use GF_ATOMIC ops to update inode->nlookup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644164 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22715 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 09:26:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 09:26:50 +0000 Subject: [Bugs] [Bug 1644164] Use GF_ATOMIC ops to update inode->nlookup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644164 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|CURRENTRELEASE |--- Keywords| |Reopened --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22715 (core: Use GF_ATOMIC ops to update inode->nlookup) posted (#2) for review on release-3.12 by None -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 09:27:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 09:27:25 +0000 Subject: [Bugs] [Bug 1709248] New: [geo-rep]: Non-root - Unable to set up mountbroker root directory and group Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709248 Bug ID: 1709248 Summary: [geo-rep]: Non-root - Unable to set up mountbroker root directory and group Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Keywords: Regression Severity: high Assignee: bugs at gluster.org Reporter: sunkumar at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, rallan at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1708043 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1708043 +++ Description of problem: ======================== # gluster-mountbroker setup /var/mountbroker-root geogroup Traceback (most recent call last): File "/usr/sbin/gluster-mountbroker", line 396, in runcli() File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 225, in runcli cls.run(args) File "/usr/sbin/gluster-mountbroker", line 230, in run args.group]) File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 127, in execute_in_peers raise GlusterCmdException((rc, out, err, " ".join(cmd))) gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to end. Error : Success\n', 'gluster system:: execute mountbroker.py node-setup /var/mountbroker-root geogroup') Version-Release number of selected component (if applicable): How reproducible: ================= Always Steps to Reproduce: ==================== 1. Create a master and slave volume 2. Create groups on all slaves - geogroup 3. Add user to the group created on all slaves - geoaccount 4. Set up mountbroker root directory and group 5. Add slave vol and user to the mountbroker service Actual results: =============== Unable to set up the mountbroker root directory and group successfully Expected results: ================= Should be able to set up mountbroker root directory and group successfully. 
Should be able to add slave vol and user to the mountbroker service Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1708043 [Bug 1708043] [geo-rep]: Non-root - Unable to set up mountbroker root directory and group -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 09:27:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 09:27:47 +0000 Subject: [Bugs] [Bug 1709248] [geo-rep]: Non-root - Unable to set up mountbroker root directory and group In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709248 Sunny Kumar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |sunkumar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 09:32:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 09:32:22 +0000 Subject: [Bugs] [Bug 1709248] [geo-rep]: Non-root - Unable to set up mountbroker root directory and group In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709248 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22716 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 09:32:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 09:32:23 +0000 Subject: [Bugs] [Bug 1709248] [geo-rep]: Non-root - Unable to set up mountbroker root directory and group In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709248 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22716 (geo-rep : fix mountbroker setup) posted (#1) for review on master by Sunny Kumar -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 09:47:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 09:47:57 +0000 Subject: [Bugs] [Bug 1709262] New: Use GF_ATOMIC ops to update inode->nlookup Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709262 Bug ID: 1709262 Summary: Use GF_ATOMIC ops to update inode->nlookup Product: GlusterFS Version: experimental Status: NEW Component: core Keywords: Reopened Assignee: bugs at gluster.org Reporter: roidinev at gmail.com CC: bugs at gluster.org, moagrawa at redhat.com Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1644164 +++ Description of problem: Use GF_ATOMIC ops to update inode->nlookup Version-Release number of selected component (if applicable): How reproducible: It is just an enhancement to use atomic ops to update lookup counter. Steps to Reproduce: 1. 2. 3. 
Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2018-10-30 07:09:56 UTC --- REVIEW: https://review.gluster.org/21305 (core: Use GF_ATOMIC ops to update inode->nlookup) posted (#3) for review on master by MOHIT AGRAWAL --- Additional comment from Worker Ant on 2018-10-30 11:40:54 UTC --- REVIEW: https://review.gluster.org/21305 (core: Use GF_ATOMIC ops to update inode->nlookup) posted (#3) for review on master by MOHIT AGRAWAL --- Additional comment from Shyamsundar on 2019-03-25 16:31:39 UTC --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ --- Additional comment from Worker Ant on 2019-05-13 09:26:50 UTC --- REVIEW: https://review.gluster.org/22715 (core: Use GF_ATOMIC ops to update inode->nlookup) posted (#2) for review on release-3.12 by None -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 09:48:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 09:48:59 +0000 Subject: [Bugs] [Bug 1709262] Use GF_ATOMIC ops to update inode->nlookup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709262 --- Comment #1 from baul --- fix in 3.12 release -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 09:50:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 09:50:51 +0000 Subject: [Bugs] [Bug 1644164] Use GF_ATOMIC ops to update inode->nlookup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644164 --- Comment #5 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22715 (core: Use GF_ATOMIC ops to update inode->nlookup) posted (#4) for review on release-3.12 by None -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 09:50:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 09:50:52 +0000 Subject: [Bugs] [Bug 1644164] Use GF_ATOMIC ops to update inode->nlookup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644164 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22715 | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 09:50:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 09:50:53 +0000 Subject: [Bugs] [Bug 1709262] Use GF_ATOMIC ops to update inode->nlookup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709262 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22715 -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 09:50:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 09:50:54 +0000 Subject: [Bugs] [Bug 1709262] Use GF_ATOMIC ops to update inode->nlookup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709262 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22715 (core: Use GF_ATOMIC ops to update inode->nlookup) posted (#4) for review on release-3.12 by None -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 13 11:07:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 11:07:13 +0000 Subject: [Bugs] [Bug 1707728] geo-rep: Sync hangs with tarssh as sync-engine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707728 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-13 11:07:13 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22684 (geo-rep: Fix sync hang with tarssh) merged (#4) on master by Sunny Kumar -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 13 15:44:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 13 May 2019 15:44:44 +0000 Subject: [Bugs] [Bug 1687811] core dump generated while running the test ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687811 Niels de Vos changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1702951 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 04:23:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 04:23:08 +0000 Subject: [Bugs] [Bug 1563086] Provide a script that can be run to wait until the bricks come-online on startup. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1563086 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|VERIFIED |CLOSED Fixed In Version| |gluster-block-v0.4 Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-14 04:23:08 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 05:50:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 05:50:43 +0000 Subject: [Bugs] [Bug 1709653] New: geo-rep: With heavy rename workload geo-rep log if flooded Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709653 Bug ID: 1709653 Summary: geo-rep: With heavy rename workload geo-rep log if flooded Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: With heavy rename workload as mentioned in bug 1694820, geo-rep log is flooded with gfid conflict resolution logs. 
All the entries to be fixed are logged at INFO level. Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. Setup geo-rep and run the reproducer given in bug 1694820 Actual results: Geo-rep log is flooded Expected results: Geo-rep log should not be flooded. Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 05:50:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 05:50:57 +0000 Subject: [Bugs] [Bug 1709653] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709653 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 05:53:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 05:53:31 +0000 Subject: [Bugs] [Bug 1709653] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709653 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22720 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 05:53:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 05:53:33 +0000 Subject: [Bugs] [Bug 1709653] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709653 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22720 (geo-rep: Convert gfid conflict resolutiong logs into debug) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 06:11:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 06:11:53 +0000 Subject: [Bugs] [Bug 1709660] New: Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709660 Bug ID: 1709660 Summary: Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT Product: GlusterFS Version: 6 Status: NEW Component: disperse Keywords: Reopened Assignee: bugs at gluster.org Reporter: pkarampu at redhat.com CC: bugs at gluster.org Depends On: 1706603 Blocks: 1709174 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1706603 +++ Mount crashing in an 'ASSERT' that checks the inode size in the function ec-inode-write.c Program terminated with signal 11, Segmentation fault. 
#0 0x00007f5502715dcb in ec_manager_truncate (fop=0x7f53ff654910, state=) at ec-inode-write.c:1475 1475 GF_ASSERT(ec_get_inode_size(fop, fop->locks[0].lock->loc.inode, This is the corresponding thread: Thread 1 (Thread 0x7f54f907a700 (LWP 31806)): #0 0x00007f5502715dcb in ec_manager_truncate (fop=0x7f53ff654910, state=) at ec-inode-write.c:1475 #1 0x00007f55026f399b in __ec_manager (fop=0x7f53ff654910, error=0) at ec-common.c:2698 #2 0x00007f55026f3b78 in ec_resume (fop=0x7f53ff654910, error=0) at ec-common.c:481 #3 0x00007f55026f3c9f in ec_complete (fop=0x7f53ff654910) at ec-common.c:554 #4 0x00007f5502711d0c in ec_inode_write_cbk (frame=, this=0x7f54fc186380, cookie=0x3, op_ret=0, op_errno=0, prestat=0x7f54f9079920, poststat=0x7f54f9079990, xdata=0x0) at ec-inode-write.c:156 #5 0x00007f550298224c in client3_3_ftruncate_cbk (req=, iov=, count=, myframe=0x7f5488ba7870) at client-rpc-fops.c:1415 #6 0x00007f5510476960 in rpc_clnt_handle_reply (clnt=clnt at entry=0x7f54fc4a1330, pollin=pollin at entry=0x7f549b65dc30) at rpc-clnt.c:778 #7 0x00007f5510476d03 in rpc_clnt_notify (trans=, mydata=0x7f54fc4a1360, event=, data=0x7f549b65dc30) at rpc-clnt.c:971 #8 0x00007f5510472a73 in rpc_transport_notify (this=this at entry=0x7f54fc4a1500, event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f549b65dc30) at rpc-transport.c:538 #9 0x00007f5505067566 in socket_event_poll_in (this=this at entry=0x7f54fc4a1500, notify_handled=) at socket.c:2315 #10 0x00007f5505069b0c in socket_event_handler (fd=90, idx=99, gen=472, data=0x7f54fc4a1500, poll_in=1, poll_out=0, poll_err=0) at socket.c:2467 #11 0x00007f551070c7e4 in event_dispatch_epoll_handler (event=0x7f54f9079e80, event_pool=0x5625cf18aa30) at event-epoll.c:583 #12 event_dispatch_epoll_worker (data=0x7f54fc296580) at event-epoll.c:659 #13 0x00007f550f50ddd5 in start_thread (arg=0x7f54f907a700) at pthread_create.c:307 #14 0x00007f550edd5ead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111 We're crashing in this part of the code, specifically: (gdb) l 1470 1471 /* This shouldn't fail because we have the inode locked. */ 1472 /* Inode size doesn't need to be updated under locks, because 1473 * conflicting operations won't be in-flight 1474 */ 1475 GF_ASSERT(ec_get_inode_size(fop, fop->locks[0].lock->loc.inode, 1476 &cbk->iatt[0].ia_size)); 1477 cbk->iatt[1].ia_size = fop->user_size; 1478 /* This shouldn't fail because we have the inode locked. 
*/ 1479 GF_ASSERT(ec_set_inode_size(fop, fop->locks[0].lock->loc.inode, (gdb) p *cbk $7 = {list = {next = 0x7f53ff654950, prev = 0x7f53ff654950}, answer_list = {next = 0x7f53ff654960, prev = 0x7f53ff654960}, fop = 0x7f53ff654910, next = 0x0, idx = 3, op_ret = 0, op_errno = 0, count = 1, mask = 8, xdata = 0x0, dict = 0x0, int32 = 0, uintptr = {0, 0, 0}, size = 0, version = {0, 0}, inode = 0x0, fd = 0x0, statvfs = {f_bsize = 0, f_frsize = 0, f_blocks = 0, f_bfree = 0, f_bavail = 0, f_files = 0, f_ffree = 0, f_favail = 0, f_fsid = 0, f_flag = 0, f_namemax = 0, __f_spare = {0, 0, 0, 0, 0, 0}}, iatt = {{ia_ino = 12285952560967103824, ia_gfid = "\337\b\247-\b\344F?\200x?\276\265P", ia_dev = 2224, ia_type = IA_IFREG, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 1 '\001', write = 1 '\001', exec = 1 '\001'}, group = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}, other = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}}, ia_nlink = 1, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 491520, ia_blksize = 4096, ia_blocks = 3840, ia_atime = 1557032019, ia_atime_nsec = 590833985, ia_mtime = 1557032498, ia_mtime_nsec = 824769499, ia_ctime = 1557032498, ia_ctime_nsec = 824769499}, {ia_ino = 12285952560967103824, ia_gfid = "\337\b\247-\b\344F?\200x?\276\265P", ia_dev = 2224, ia_type = IA_IFREG, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 1 '\001', write = 1 '\001', exec = 1 '\001'}, group = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}, other = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}}, ia_nlink = 1, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 4096, ia_blocks = 0, ia_atime = 1557032019, ia_atime_nsec = 590833985, ia_mtime = 1557032498, ia_mtime_nsec = 824769499, ia_ctime = 1557032498, ia_ctime_nsec = 824769499}, {ia_ino = 0, ia_gfid = '\000' , ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}, {ia_ino = 0, ia_gfid = '\000' , ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}, {ia_ino = 0, ia_gfid = '\000' , ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}}, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0, data = '\000' }}, vector = 0x0, buffers = 0x0, str = 0x0, entries = {{list = {next = 
0x7f54429a3188, prev = 0x7f54429a3188}, {next = 0x7f54429a3188, prev = 0x7f54429a3188}}, d_ino = 0, d_off = 0, d_len = 0, d_type = 0, d_stat = {ia_ino = 0, ia_gfid = '\000' , ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}, dict = 0x0, inode = 0x0, d_name = 0x7f54429a3230 ""}, offset = 0, what = GF_SEEK_DATA} (gdb) p *cbk->fop $8 = {id = 24, refs = 3, state = 4, minimum = 1, expected = 1, winds = 0, jobs = 1, error = 0, parent = 0x7f532c197d80, xl = 0x7f54fc186380, req_frame = 0x7f532c048c60, frame = 0x7f54700662d0, cbk_list = { next = 0x7f54429a2a10, prev = 0x7f54429a2a10}, answer_list = {next = 0x7f54429a2a20, prev = 0x7f54429a2a20}, pending_list = {next = 0x7f533007acc0, prev = 0x7f5477976ac0}, answer = 0x7f54429a2a10, lock_count = 0, locked = 0, locks = {{lock = 0x0, fop = 0x0, owner_list = {next = 0x7f53ff6549a0, prev = 0x7f53ff6549a0}, wait_list = {next = 0x7f53ff6549b0, prev = 0x7f53ff6549b0}, update = {_gf_false, _gf_false}, dirty = {_gf_false, _gf_false}, optimistic_changelog = _gf_false, base = 0x0, size = 0, waiting_flags = 0, fl_start = 0, fl_end = 0}, {lock = 0x0, fop = 0x0, owner_list = {next = 0x7f53ff654a10, prev = 0x7f53ff654a10}, wait_list = { next = 0x7f53ff654a20, prev = 0x7f53ff654a20}, update = {_gf_false, _gf_false}, dirty = {_gf_false, _gf_false}, optimistic_changelog = _gf_false, base = 0x0, size = 0, waiting_flags = 0, fl_start = 0, fl_end = 0}}, first_lock = 0, lock = {spinlock = 0, mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' , __align = 0}}, flags = 0, first = 0, mask = 8, healing = 0, remaining = 0, received = 8, good = 8, uid = 0, gid = 0, wind = 0x7f5502710ae0 , handler = 0x7f5502715c50 , resume = 0x0, cbks = {access = 0x7f550270af50 , create = 0x7f550270af50 , discard = 0x7f550270af50 , entrylk = 0x7f550270af50 , fentrylk = 0x7f550270af50 , fallocate = 0x7f550270af50 , flush = 0x7f550270af50 , fsync = 0x7f550270af50 , fsyncdir = 0x7f550270af50 , getxattr = 0x7f550270af50 , fgetxattr = 0x7f550270af50 , heal = 0x7f550270af50 , fheal = 0x7f550270af50 , inodelk = 0x7f550270af50 , finodelk = 0x7f550270af50 , link = 0x7f550270af50 , lk = 0x7f550270af50 , lookup = 0x7f550270af50 , mkdir = 0x7f550270af50 , mknod = 0x7f550270af50 , open = 0x7f550270af50 , opendir = 0x7f550270af50 , readdir = 0x7f550270af50 , readdirp = 0x7f550270af50 , readlink = 0x7f550270af50 , readv = 0x7f550270af50 , removexattr = 0x7f550270af50 , fremovexattr = 0x7f550270af50 , rename = 0x7f550270af50 , rmdir = 0x7f550270af50 , setattr = 0x7f550270af50 , fsetattr = 0x7f550270af50 , setxattr = 0x7f550270af50 , fsetxattr = 0x7f550270af50 , stat = 0x7f550270af50 , fstat = 0x7f550270af50 , statfs = 0x7f550270af50 , symlink = 0x7f550270af50 , truncate = 0x7f550270af50 , ftruncate = 0x7f550270af50 , unlink = 0x7f550270af50 , writev = 0x7f550270af50 , xattrop = 0x7f550270af50 , fxattrop = 0x7f550270af50 , zerofill = 0x7f550270af50 , seek = 0x7f550270af50 , ipc = 0x7f550270af50 }, data = 0x7f5477976a60, heal = 0x0, healer = {next = 0x7f53ff654b08, prev = 
0x7f53ff654b08}, user_size = 0, head = 0, use_fd = 1, xdata = 0x0, dict = 0x0, int32 = 0, uint32 = 0, size = 0, offset = 0, mode = {0, 0}, entrylk_cmd = ENTRYLK_LOCK, entrylk_type = ENTRYLK_RDLCK, xattrop_flags = GF_XATTROP_ADD_ARRAY, dev = 0, inode = 0x0, fd = 0x7f54dfb900a0, iatt = {ia_ino = 0, ia_gfid = '\000' , ia_dev = 0, ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}, str = {0x0, 0x0}, loc = {{path = 0x0, name = 0x0, inode = 0x0, parent = 0x0, gfid = '\000' , pargfid = '\000' }, {path = 0x0, name = 0x0, inode = 0x0, parent = 0x0, gfid = '\000' , pargfid = '\000' }}, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0, data = '\000' }}, vector = 0x0, buffers = 0x0, seek = GF_SEEK_DATA, errstr = 0x0} Checking further the lock: (gdb) p fop->locks[0] $5 = {lock = 0x0, fop = 0x0, owner_list = {next = 0x7f53ff6549a0, prev = 0x7f53ff6549a0}, wait_list = {next = 0x7f53ff6549b0, prev = 0x7f53ff6549b0}, update = {_gf_false, _gf_false}, dirty = {_gf_false, _gf_false}, optimistic_changelog = _gf_false, base = 0x0, size = 0, waiting_flags = 0, fl_start = 0, fl_end = 0} (gdb) p fop->locks[0].lock $6 = (ec_lock_t *) 0x0 (gdb) p fop->locks[0].lock->loc.inode Cannot access memory at address 0x90 --- Additional comment from Worker Ant on 2019-05-06 00:01:57 UTC --- REVIEW: https://review.gluster.org/22660 (cluster/ec: Reopen shouldn't happen with O_TRUNC) merged (#1) on master by Pranith Kumar Karampuri --- Additional comment from Worker Ant on 2019-05-07 12:25:28 UTC --- REVIEW: https://review.gluster.org/22674 (tests: Test openfd heal doesn't truncate files) posted (#1) for review on master by Pranith Kumar Karampuri Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1706603 [Bug 1706603] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 06:11:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 06:11:53 +0000 Subject: [Bugs] [Bug 1706603] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706603 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1709660 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709660 [Bug 1709660] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT -- You are receiving this mail because: You are the assignee for the bug. 
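The backtrace and structure dumps quoted above were pulled out of a glusterfsd core with gdb. For anyone who wants to extract the same details from their own core, a minimal session is sketched below; the core path is a placeholder, and debuginfo matching the installed glusterfs build is assumed, otherwise the line numbers and structure layouts will not resolve.

# sketch: inspecting a glusterfsd core like the one analysed above
# (core path is a placeholder; matching glusterfs debuginfo must be installed)
gdb /usr/sbin/glusterfsd /var/crash/core.glusterfsd.12345
(gdb) thread apply all bt          # locate the faulting thread
(gdb) thread 1                     # Thread 1 is the crashing thread in the dump above
(gdb) frame 0                      # lands in ec_manager_truncate() in this report
(gdb) list                         # show the surrounding source, e.g. the GF_ASSERT
(gdb) print *cbk                   # dump the answer structure as quoted above
(gdb) print fop->locks[0].lock     # NULL here, which is what pins down the crash

The last command is the telling one in this report: fop->locks[0].lock is NULL, so evaluating fop->locks[0].lock->loc.inode inside the GF_ASSERT argument dereferences a null pointer before the assertion itself can fire.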
From bugzilla at redhat.com Tue May 14 06:26:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 06:26:25 +0000 Subject: [Bugs] [Bug 1709660] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709660 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22721 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 06:26:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 06:26:26 +0000 Subject: [Bugs] [Bug 1709660] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709660 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22721 (cluster/ec: Reopen shouldn't happen with O_TRUNC) posted (#1) for review on release-6 by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 07:09:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 07:09:23 +0000 Subject: [Bugs] [Bug 1708926] Invalid memory access while executing cleanup_nad_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708926 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |pkarampu at redhat.com Flags| |needinfo?(rkavunga at redhat.c | |om) --- Comment #2 from Pranith Kumar K --- Rafi, Could you share the bt of the core so that it is easier to understand why exactly it crashed? Pranith -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 07:17:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 07:17:27 +0000 Subject: [Bugs] [Bug 1709685] New: Geo-rep: Value of pending entry operations in detail status output is going up after each synchronization. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709685 Bug ID: 1709685 Summary: Geo-rep: Value of pending entry operations in detail status output is going up after each synchronization. Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: geo-replication Severity: high Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, khiremat at redhat.com, srangana at redhat.com, vnosov at stonefly.com Depends On: 1512093 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1512093 +++ Description of problem: Geo-replication is created, it is active, no errors is reported by CLI and at glusterfs logs. Start write files to master volume. Use geo-replication detail status command. Value of "Number of entry Operations pending" is growing after each synchronization. It never going down except after the replication is restarted. Version-Release number of selected component (if applicable): GlusterFS 3.12.2 How reproducible: 100% Steps to Reproduce: 1. See the attached file "gluster_geo_repl_bug.txt". 
Actual results: Value of pending entries is not going down after synchronization. Expected results: In case of successful synchronization the value has to be "0". Additional info: 1. See the attached glusterfs log files from the master system "gluster_logs_master_182.tgz". 2. See the attached glusterfs log files from the slave system "gluster_logs_slave_183.tgz". --- Additional comment from vnosov on 2017-11-10 20:09 UTC --- --- Additional comment from vnosov on 2017-11-10 20:10 UTC --- --- Additional comment from Aravinda VK on 2017-11-27 06:10:58 UTC --- IO from mount stopped while running status command? Entry count in status changes continuously as long as IO is going on in Master volume. (Crawl -> Increment count -> Create entries in slave -> Decrement the count) This status column is not related to checkpoint. Only if no IO in Master mount and status shows entry count then it might have some issue. --- Additional comment from vnosov on 2018-09-13 16:20:16 UTC --- Problem is reproduced with GlusterFS 3.12.13. --- Additional comment from Shyamsundar on 2018-10-23 14:54:08 UTC --- Release 3.12 has been EOLd and this bug was still found to be in the NEW state, hence moving the version to mainline, to triage the same and take appropriate actions. --- Additional comment from Yaniv Kaul on 2019-04-17 10:00:18 UTC --- (In reply to Shyamsundar from comment #5) > Release 3.12 has been EOLd and this bug was still found to be in the NEW > state, hence moving the version to mainline, to triage the same and take > appropriate actions. Status? --- Additional comment from Shyamsundar on 2019-04-17 10:12:48 UTC --- (In reply to Yaniv Kaul from comment #6) > (In reply to Shyamsundar from comment #5) > > Release 3.12 has been EOLd and this bug was still found to be in the NEW > > state, hence moving the version to mainline, to triage the same and take > > appropriate actions. > > Status? Will need to check with the assignee or component maintainer, which is Kotresh in both cases. @Kotresh request an update here? Thanks. --- Additional comment from Worker Ant on 2019-04-23 05:21:26 UTC --- REVIEW: https://review.gluster.org/22603 (geo-rep: Fix entries and metadata counters in geo-rep status) posted (#1) for review on master by Kotresh HR --- Additional comment from Worker Ant on 2019-04-24 16:18:18 UTC --- REVIEW: https://review.gluster.org/22603 (geo-rep: Fix entries and metadata counters in geo-rep status) merged (#2) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1512093 [Bug 1512093] Value of pending entry operations in detail status output is going up after each synchronization. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 07:17:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 07:17:27 +0000 Subject: [Bugs] [Bug 1512093] Value of pending entry operations in detail status output is going up after each synchronization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1512093 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1709685 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709685 [Bug 1709685] Geo-rep: Value of pending entry operations in detail status output is going up after each synchronization. -- You are receiving this mail because: You are on the CC list for the bug. 
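Aravinda's point above is that the entry counter in the detail status only indicates a problem when the master is quiescent: while I/O is running, the counter legitimately rises and falls with the crawl. A quick way to check for the stuck-counter symptom this bug describes, sketched with placeholder master/slave names, is to stop all writes on the master mount and then watch the detail status for a while; with no I/O the pending-entry value should drain to 0, and a value that stays put is the behaviour addressed by the Gerrit 22603 fix referenced above.

# MASTERVOL, SLAVEHOST and SLAVEVOL are placeholders for the actual session
watch -n 10 'gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status detail'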
From bugzilla at redhat.com Tue May 14 07:17:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 07:17:42 +0000 Subject: [Bugs] [Bug 1709685] Geo-rep: Value of pending entry operations in detail status output is going up after each synchronization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709685 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 07:20:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 07:20:25 +0000 Subject: [Bugs] [Bug 1709685] Geo-rep: Value of pending entry operations in detail status output is going up after each synchronization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709685 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22722 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 07:20:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 07:20:26 +0000 Subject: [Bugs] [Bug 1709685] Geo-rep: Value of pending entry operations in detail status output is going up after each synchronization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709685 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22722 (geo-rep: Fix entries and metadata counters in geo-rep status) posted (#1) for review on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 07:33:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 07:33:23 +0000 Subject: [Bugs] [Bug 1698566] shd crashed while executing ./tests/bugs/core/bug-1432542-mpx-restart-crash.t in CI In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698566 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |WORKSFORME Last Closed| |2019-05-14 07:33:23 --- Comment #2 from Mohammed Rafi KC --- This is most likely already fixed by the patch https://review.gluster.org/#/c/glusterfs/+/22468/. To verify this I ran the test in a loop overnight and I haven't seen any crash. So closing the bug. -- You are receiving this mail because: You are on the CC list for the bug.
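For reference, "ran the test in a loop overnight" from the closing comment above usually looks something like the sketch below: the .t regression tests in the glusterfs tree are TAP scripts driven by prove, so a single test can be re-run until it fails. The tree path and iteration count are arbitrary, and the tests are assumed to be run as root from a built source tree.

# sketch: soak-testing one regression test to chase a rare crash
cd /path/to/glusterfs
for i in $(seq 1 200); do
    prove -vf ./tests/bugs/core/bug-1432542-mpx-restart-crash.t || break
done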
From bugzilla at redhat.com Tue May 14 08:39:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 08:39:26 +0000 Subject: [Bugs] [Bug 1709734] New: Geo-rep: Data inconsistency while syncing heavy renames with constant destination name Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709734 Bug ID: 1709734 Summary: Geo-rep: Data inconsistency while syncing heavy renames with constant destination name Product: GlusterFS Version: 6 Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community This bug was initially created as a copy of Bug #1694820 I am copying this bug because: Description of problem: This problem only exists in a heavy RENAME workload where parallel renames are frequent or RENAMEs are done with an existing destination. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Run frequent RENAMEs on the master mount and check for sync on the slave. Ex - while true; do uuid="`uuidgen`"; echo "some data" > "test$uuid"; mv "test$uuid" "test" -f; done Actual results: Does not sync renames properly and creates multiple files on the slave. Expected results: Should sync renames. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 08:41:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 08:41:00 +0000 Subject: [Bugs] [Bug 1709734] Geo-rep: Data inconsistency while syncing heavy renames with constant destination name In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709734 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 08:42:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 08:42:17 +0000 Subject: [Bugs] [Bug 1709737] New: geo-rep: Always uses rsync even with use_tarssh set to true Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709737 Bug ID: 1709737 Summary: geo-rep: Always uses rsync even with use_tarssh set to true Product: GlusterFS Version: 6 Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community This bug was initially created as a copy of Bug #1707686 I am copying this bug because: Description of problem: It always uses rsync to sync data even though use_tarssh is set to true. Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. Set up geo-rep between two gluster volumes and start it 2. Set use_tarssh to true 3. Write a huge file on master 4. ps -ef | egrep "tar|rsync" while the big file is syncing to the slave. It shows an rsync process instead of tar over ssh Actual results: use_tarssh has no effect on the sync engine. It's always using rsync. Expected results: Setting use_tarssh should make the sync use tarssh and not rsync Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
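For the two copies just above (the rename inconsistency and the ignored use_tarssh setting), the session commands involved are sketched below with placeholder volume names; the report mentions both the older boolean spelling (use_tarssh true) and the newer sync-method option, and the check in step 4 is quoted directly from the report.

# MASTERVOL, SLAVEHOST and SLAVEVOL are placeholders for the actual session
# switch the sync engine to tar-over-ssh
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config sync-method tarssh

# while a large file is being synced, confirm which engine is actually running
ps -ef | egrep "tar|rsync"

If an rsync process still shows up after the option is set, that is the behaviour tracked here and addressed by the "geo-rep: Fix sync-method config" review posted further down in this digest.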
From bugzilla at redhat.com Tue May 14 08:43:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 08:43:28 +0000 Subject: [Bugs] [Bug 1709737] geo-rep: Always uses rsync even with use_tarssh set to true In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709737 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 08:44:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 08:44:36 +0000 Subject: [Bugs] [Bug 1709738] New: geo-rep: Sync hangs with tarssh as sync-engine Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709738 Bug ID: 1709738 Summary: geo-rep: Sync hangs with tarssh as sync-engine Product: GlusterFS Version: 4.1 Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community This bug was initially created as a copy of Bug #1707728 I am copying this bug because: Description of problem: When the heavy workload as below on master, the sync is hung with sync engine tarssh. It's working fine with rsync as sync engine. for i in {1..10000} do echo "sample data" > //file$i mv -f //file$i / 3. Start geo-rep and wait till the status is changelog crawl 4. Configure sync-jobs to 1 gluster vol geo-rep :: config sync-jobs 1 5. Configure sync engine to tarssh gluster vol geo-rep :: config sync-method tarssh 6. Stop the geo-rep 7. Do the I/O on mastermnt as mentioned for i in {1..10000} do echo "sample data" > //file$i mv -f //file$i / References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709738 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Version|4.1 |6 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 08:45:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 08:45:52 +0000 Subject: [Bugs] [Bug 1709738] geo-rep: Sync hangs with tarssh as sync-engine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709738 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 08:52:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 08:52:23 +0000 Subject: [Bugs] [Bug 1709734] Geo-rep: Data inconsistency while syncing heavy renames with constant destination name In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709734 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22723 -- You are receiving this mail because: You are on the CC list for the bug. 
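The reproducer in the bug 1709738 description above lost its path placeholders to the list archiver (text between angle brackets was stripped), so the loop is hard to read as archived. Below is a best-guess reconstruction for readability only: MASTER_MNT, SRC_DIR and DEST_DIR are assumed stand-ins for the stripped master mount point and directories, not values taken from the original report.

# assumed placeholders: MASTER_MNT, SRC_DIR, DEST_DIR
for i in {1..10000}; do
    echo "sample data" > "$MASTER_MNT/$SRC_DIR/file$i"
    mv -f "$MASTER_MNT/$SRC_DIR/file$i" "$MASTER_MNT/$DEST_DIR/"
done

# the surviving steps also pin the session to a single sync job and the tarssh engine
# (MASTERVOL, SLAVEHOST, SLAVEVOL are placeholders)
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config sync-jobs 1
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config sync-method tarssh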
From bugzilla at redhat.com Tue May 14 08:52:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 08:52:24 +0000 Subject: [Bugs] [Bug 1709734] Geo-rep: Data inconsistency while syncing heavy renames with constant destination name In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709734 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22723 (geo-rep: Fix rename with existing destination with same gfid) posted (#1) for review on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 08:53:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 08:53:32 +0000 Subject: [Bugs] [Bug 1709737] geo-rep: Always uses rsync even with use_tarssh set to true In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709737 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22724 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 08:53:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 08:53:33 +0000 Subject: [Bugs] [Bug 1709737] geo-rep: Always uses rsync even with use_tarssh set to true In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709737 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22724 (geo-rep: Fix sync-method config) posted (#1) for review on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 08:54:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 08:54:50 +0000 Subject: [Bugs] [Bug 1709738] geo-rep: Sync hangs with tarssh as sync-engine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709738 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22725 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 08:54:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 08:54:51 +0000 Subject: [Bugs] [Bug 1709738] geo-rep: Sync hangs with tarssh as sync-engine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709738 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22725 (geo-rep: Fix sync hang with tarssh) posted (#1) for review on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue May 14 09:22:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:22:27 +0000 Subject: [Bugs] [Bug 1221629] Bitd crashed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1221629 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-14 09:22:27 --- Comment #6 from Amar Tumballi --- Not seen this in any latest releases. -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Tue May 14 09:22:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:22:57 +0000 Subject: [Bugs] [Bug 1221869] Even after reseting the bitrot and scrub demons are running In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1221869 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-14 09:22:57 -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Tue May 14 09:24:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:24:27 +0000 Subject: [Bugs] [Bug 1603220] glusterfs-server depends on deprecated liblvm2app In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1603220 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-14 09:24:27 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 09:27:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:27:59 +0000 Subject: [Bugs] [Bug 1187296] No way to gracefully rotate the libgfapi Samba vfs_glusterfs logfile. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1187296 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-14 09:27:59 --- Comment #5 from Amar Tumballi --- comment#3 talks about the issue where glusterfs can't handle SIGHUP of the process which uses gfapi. GFAPI changes are not happening now, and hence marking it as DEFERRED. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 09:28:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:28:00 +0000 Subject: [Bugs] [Bug 1369452] No way to gracefully rotate the libgfapi Samba vfs_glusterfs logfile. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1369452 Bug 1369452 depends on bug 1187296, which changed state. Bug 1187296 Summary: No way to gracefully rotate the libgfapi Samba vfs_glusterfs logfile. 
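Comment #5 just above explains why this stays unresolved: a process that embeds gfapi (Samba with vfs_glusterfs here) cannot simply be sent SIGHUP to make the glusterfs log reopen, the way the standalone daemons can. Until gfapi grows such a hook, the usual workaround is rotation that never asks the writer to reopen its file. A sketch follows; the log path is an assumption and has to match whatever the share's glusterfs:logfile option points at.

# sketch: copytruncate-based rotation for the vfs_glusterfs client log
# (log path below is an assumption; match it to the share's glusterfs:logfile)
cat > /etc/logrotate.d/samba-vfs-glusterfs <<'EOF'
/var/log/samba/glusterfs-*.log {
    weekly
    rotate 4
    compress
    missingok
    copytruncate
}
EOF

copytruncate copies and then truncates the live file instead of renaming it, so the gfapi logger keeps writing to the same open descriptor and nothing needs to be signalled; the trade-off is a small window in which lines written during the copy can be lost.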
https://bugzilla.redhat.com/show_bug.cgi?id=1187296 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |DEFERRED -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 09:28:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:28:00 +0000 Subject: [Bugs] [Bug 1369453] No way to gracefully rotate the libgfapi Samba vfs_glusterfs logfile. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1369453 Bug 1369453 depends on bug 1187296, which changed state. Bug 1187296 Summary: No way to gracefully rotate the libgfapi Samba vfs_glusterfs logfile. https://bugzilla.redhat.com/show_bug.cgi?id=1187296 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |DEFERRED -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 09:31:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:31:17 +0000 Subject: [Bugs] [Bug 1349620] libgfapi: Reduce memcpy in glfs write In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1349620 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|spandit at commvault.com |rkothiya at redhat.com QA Contact|sdharane at redhat.com | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 09:33:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:33:27 +0000 Subject: [Bugs] [Bug 1449675] adding return statement in dict_unref() of libglusterfs/src/dict.c In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1449675 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-14 09:33:27 --- Comment #2 from Amar Tumballi --- I see that this effort is not active.. marking as DEFERRED, please feel free to reopen it, once the work gets revisited. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 09:35:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:35:04 +0000 Subject: [Bugs] [Bug 1414892] quota : refactor the glusterd-quota code for upgrade scenarios In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1414892 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-14 09:35:04 --- Comment #1 from Amar Tumballi --- Not a focus right now! -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue May 14 09:35:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:35:38 +0000 Subject: [Bugs] [Bug 1032382] autogen.sh warnings with automake-1.14 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1032382 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com Assignee|kaushal at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 09:38:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:38:25 +0000 Subject: [Bugs] [Bug 1397397] glusterfs_ctx_new() initializes the lock, other functions do not need to In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1397397 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |DUPLICATE Last Closed| |2019-05-14 09:38:25 --- Comment #2 from Amar Tumballi --- *** This bug has been marked as a duplicate of bug 1397419 *** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 09:38:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:38:25 +0000 Subject: [Bugs] [Bug 1397419] glusterfs_ctx_defaults_init is re-initializing ctx->locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1397419 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ndevos at redhat.com --- Comment #6 from Amar Tumballi --- *** Bug 1397397 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 09:39:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:39:15 +0000 Subject: [Bugs] [Bug 1079709] Possible error on Gluster documentation (PDF, Introduction to Gluster Architecture, v3.1) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1079709 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |INSUFFICIENT_DATA Last Closed| |2019-05-14 09:39:15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 09:56:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:56:53 +0000 Subject: [Bugs] [Bug 1291262] glusterd: fix gluster volume sync after successful deletion In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1291262 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |DUPLICATE Last Closed| |2019-05-14 09:56:53 --- Comment #4 from Sanju --- commit 0b450b8b35 has fixed this issue. So, closing this bug as a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1605077 (which is used to track the bug, while the patch has posted for review). 
Thanks, Sanju *** This bug has been marked as a duplicate of bug 1605077 *** -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 09:56:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 09:56:53 +0000 Subject: [Bugs] [Bug 1605077] If a node disconnects during volume delete, it assumes deleted volume as a freshly created volume when it is back online In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1605077 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |prasanna.kalever at redhat.com --- Comment #7 from Sanju --- *** Bug 1291262 has been marked as a duplicate of this bug. *** -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 11:10:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 11:10:57 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22696 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 11:10:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 11:10:58 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #659 from Worker Ant --- REVIEW: https://review.gluster.org/22696 (rpc: include nfs specific files in build only if gNFS is enabled) posted (#2) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 13:08:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 13:08:53 +0000 Subject: [Bugs] [Bug 1709143] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709143 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22711 (cluster/afr : TA: Return actual error code in case of failure) merged (#2) on release-6 by Ashish Pandey -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 13:34:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 13:34:32 +0000 Subject: [Bugs] [Bug 1643716] "OSError: [Errno 40] Too many levels of symbolic links" when syncing deletion of directory hierarchy In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1643716 Shwetha K Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED --- Comment #1 from Shwetha K Acharya --- This bug is fixed in gluster 5 release. Bugzilla link: https://bugzilla.redhat.com/show_bug.cgi?id=1597540, updating to next version will solve this problem. Please confirm your gluster version and kindly close this bug. -- You are receiving this mail because: You are on the CC list for the bug. 
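The 22696 review quoted a little earlier in this digest ("rpc: include nfs specific files in build only if gNFS is enabled") ties those RPC sources to the existing build switch for the gluster NFS server. For readers who have not used that switch: gNFS is off by default in recent releases and is only compiled in when requested at configure time. A build sketch, assuming a plain source checkout, looks like this:

# sketch: building glusterfs from source with the optional gNFS server enabled
./autogen.sh
./configure --enable-gnfs
make -j"$(nproc)"
sudo make install

Without --enable-gnfs the NFS server pieces (and, with the change proposed above, the NFS-specific rpc files as well) stay out of the resulting binaries.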
From bugzilla at redhat.com Tue May 14 13:37:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 13:37:37 +0000 Subject: [Bugs] [Bug 1652887] Geo-rep help looks to have a typo. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1652887 Shwetha K Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 13:42:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 13:42:43 +0000 Subject: [Bugs] [Bug 1643716] "OSError: [Errno 40] Too many levels of symbolic links" when syncing deletion of directory hierarchy In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1643716 Shwetha K Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(lohmaier+rhbz at gma | |il.com) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 14 16:01:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 16:01:36 +0000 Subject: [Bugs] [Bug 1708926] Invalid memory access while executing cleanup_nad_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708926 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(rkavunga at redhat.c | |om) | --- Comment #3 from Mohammed Rafi KC --- Stack trace of thread 30877: #0 0x0000000000406a07 cleanup_and_exit (glusterfsd) #1 0x0000000000406b5d glusterfs_sigwaiter (glusterfsd) #2 0x00007f51000cd58e start_thread (libpthread.so.0) #3 0x00007f50ffd1d683 __clone (libc.so.6) Stack trace of thread 30879: #0 0x00007f51000d3a7a futex_abstimed_wait_cancelable (libpthread.so.0) #1 0x00007f51003b8616 syncenv_task (libglusterfs.so.0) #2 0x00007f51003b9240 syncenv_processor (libglusterfs.so.0) #3 0x00007f51000cd58e start_thread (libpthread.so.0) #4 0x00007f50ffd1d683 __clone (libc.so.6) Stack trace of thread 30881: #0 0x00007f50ffd14cdf __GI___select (libc.so.6) #1 0x00007f51003ef1cd runner (libglusterfs.so.0) #2 0x00007f51000cd58e start_thread (libpthread.so.0) #3 0x00007f50ffd1d683 __clone (libc.so.6) Stack trace of thread 30880: #0 0x00007f51000d3a7a futex_abstimed_wait_cancelable (libpthread.so.0) #1 0x00007f51003b8616 syncenv_task (libglusterfs.so.0) #2 0x00007f51003b9240 syncenv_processor (libglusterfs.so.0) #3 0x00007f51000cd58e start_thread (libpthread.so.0) #4 0x00007f50ffd1d683 __clone (libc.so.6) Stack trace of thread 30876: #0 0x00007f51000d7500 __GI___nanosleep (libpthread.so.0) #1 0x00007f510038a346 gf_timer_proc (libglusterfs.so.0) #2 0x00007f51000cd58e start_thread (libpthread.so.0) #3 0x00007f50ffd1d683 __clone (libc.so.6) Stack trace of thread 30882: #0 0x00007f50ffd1e06e epoll_ctl (libc.so.6) #1 0x00007f51003d931e event_handled_epoll (libglusterfs.so.0) #2 0x00007f50eed9a781 socket_event_poll_in (socket.so) #3 0x00007f51003d8c9b event_dispatch_epoll_handler (libglusterfs.so.0) #4 0x00007f51000cd58e start_thread (libpthread.so.0) #5 0x00007f50ffd1d683 __clone (libc.so.6) Stack trace of thread 30875: #0 0x00007f51000cea6d __GI___pthread_timedjoin_ex (libpthread.so.0) #1 0x00007f51003d8387 event_dispatch_epoll (libglusterfs.so.0) #2 0x0000000000406592 main (glusterfsd) #3 0x00007f50ffc44413 
__libc_start_main (libc.so.6) #4 0x00000000004067de _start (glusterfsd) Stack trace of thread 30878: #0 0x00007f50ffce97f8 __GI___nanosleep (libc.so.6) #1 0x00007f50ffce96fe __sleep (libc.so.6) #2 0x00007f51003a4f5a pool_sweeper (libglusterfs.so.0) #3 0x00007f51000cd58e start_thread (libpthread.so.0) #4 0x00007f50ffd1d683 __clone (libc.so.6) Stack trace of thread 30883: #0 0x00007f51000d6b8d __lll_lock_wait (libpthread.so.0) #1 0x00007f51000cfda9 __GI___pthread_mutex_lock (libpthread.so.0) #2 0x00007f510037cd1f _gf_msg_plain_internal (libglusterfs.so.0) #3 0x00007f510037ceb3 _gf_msg_plain (libglusterfs.so.0) #4 0x00007f5100382d43 gf_log_dump_graph (libglusterfs.so.0) #5 0x00007f51003b514f glusterfs_process_svc_attach_volfp (libglusterfs.so.0) #6 0x000000000040b16d mgmt_process_volfile (glusterfsd) #7 0x0000000000410792 mgmt_getspec_cbk (glusterfsd) #8 0x00007f51003256b1 rpc_clnt_handle_reply (libgfrpc.so.0) #9 0x00007f5100325a53 rpc_clnt_notify (libgfrpc.so.0) #10 0x00007f5100322973 rpc_transport_notify (libgfrpc.so.0) #11 0x00007f50eed9a45c socket_event_poll_in (socket.so) #12 0x00007f51003d8c9b event_dispatch_epoll_handler (libglusterfs.so.0) #13 0x00007f51000cd58e start_thread (libpthread.so.0) #14 0x00007f50ffd1d683 __clone (libc.so.6) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 16:03:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 16:03:16 +0000 Subject: [Bugs] [Bug 1708926] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708926 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|Invalid memory access while |Invalid memory access while |executing cleanup_nad_exit |executing cleanup_and_exit -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 16:13:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 16:13:37 +0000 Subject: [Bugs] [Bug 1709959] New: Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 Bug ID: 1709959 Summary: Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: core Severity: high Assignee: bugs at gluster.org Reporter: jeff.bischoff at turbonomic.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 16:20:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 16:20:54 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... 
file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #1 from Jeff Bischoff --- Description of problem: Various pods (for example Heketi) in our Kubernetes cluster enter an infinite crash loop. It seems to be an issue with the gluster mounts. The error message always contains "file exists" Version-Release number of selected component (if applicable): 4.1.7 How reproducible: No known steps to reproduce, but it happens every few days in multiple environments that we are running. Steps to Reproduce: 1. Kubernetes environment is healthy, with working gluster mounts 2. Various containers enter a crash loop, with "file exists" error message The bricks appear to be offline at this point. Actual results: Gluster mounts seem to fail and never recover. Expected results: Gluster mounts are stable, or at least automatically remount after a failure. Additional info: See comments below. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 16:24:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 16:24:22 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #2 from Jeff Bischoff --- This is the Kubernetes version from our latest failing environments: $ kubectl version Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"} Here's how the heketi pod looks: $ kubectl describe pod heketi-7495cdc5fd-xqmxr Error from server (NotFound): pods "heketi-7495cdc5fd-xqmxr" not found [turbo at node1 ~]$ kubectl describe pod heketi-7495cdc5fd-xqmxr -n default Name: heketi-7495cdc5fd-xqmxr Namespace: default Priority: 0 PriorityClassName: Node: node1/10.10.168.25 Start Time: Mon, 06 May 2019 02:11:42 +0000 Labels: glusterfs=heketi-pod heketi=pod pod-template-hash=7495cdc5fd Annotations: Status: Running IP: 10.233.90.85 Controlled By: ReplicaSet/heketi-7495cdc5fd Containers: heketi: Container ID: docker://fed61190bf01d149027f187e49a8428e0654fc347de9a9164665f40247c543b3 Image: heketi/heketi:dev Image ID: docker-pullable://heketi/heketi at sha256:bcbf709fd084793e4ff0379f08ca44f71154c270d3a74df2bd146472e2d28402 Port: 8080/TCP Host Port: 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: ContainerCannotRun Message: error while creating mount source path '/var/lib/kubelet/pods/4a2574bb-6fa4-11e9-a315-005056b83c80/volumes/kubernetes.io~glusterfs/db': mkdir /var/lib/kubelet/pods/4a2574bb-6fa4-11e9-a315-005056b83c80/volumes/kubernetes.io~glusterfs/db: file exists Exit Code: 128 Started: Tue, 14 May 2019 14:34:55 +0000 Finished: Tue, 14 May 2019 14:34:55 +0000 Ready: False Restart Count: 1735 Liveness: http-get http://:8080/hello delay=30s timeout=3s period=10s #success=1 #failure=3 Readiness: http-get http://:8080/hello delay=3s timeout=3s period=10s #success=1 #failure=3 Environment: 
HEKETI_USER_KEY: HEKETI_ADMIN_KEY: HEKETI_EXECUTOR: kubernetes HEKETI_FSTAB: /var/lib/heketi/fstab HEKETI_SNAPSHOT_LIMIT: 14 HEKETI_KUBE_GLUSTER_DAEMONSET: y HEKETI_IGNORE_STALE_OPERATIONS: true Mounts: /etc/heketi from config (rw) /var/lib/heketi from db (rw) /var/run/secrets/kubernetes.io/serviceaccount from heketi-service-account-token-ntfx2 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: db: Type: Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime) EndpointsName: heketi-storage-endpoints Path: heketidbstorage ReadOnly: false config: Type: Secret (a volume populated by a Secret) SecretName: heketi-config-secret Optional: false heketi-service-account-token-ntfx2: Type: Secret (a volume populated by a Secret) SecretName: heketi-service-account-token-ntfx2 Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 3m36s (x40124 over 6d) kubelet, node1 Back-off restarting failed container I'm not at all familiar with gluster brick logs, but looking at those it appears that some health checks failed, and they were shut down? ``` [2019-05-08 13:48:33.642896] W [MSGID: 113075] [posix-helpers.c:1895:posix_fs_health_check] 0-vol_a720850474f6ce7ae6c57dcc60284b1f-posix: aio_write() on /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_0343 10c050aa134b254316068472b4cc/brick/.glusterfs/health_check returned [Resource temporarily unavailable] [2019-05-08 13:48:33.748515] M [MSGID: 113075] [posix-helpers.c:1962:posix_health_check_thread_proc] 0-vol_a720850474f6ce7ae6c57dcc60284b1f-posix: health-check failed, going down [2019-05-08 13:48:33.999892] M [MSGID: 113075] [posix-helpers.c:1981:posix_health_check_thread_proc] 0-vol_a720850474f6ce7ae6c57dcc60284b1f-posix: still alive! -> SIGTERM [2019-05-08 13:49:04.598861] W [glusterfsd.c:1514:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5) [0x7f2a27df4dd5] -->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xe5) [0x562568920d65] -->/usr/sbin/glusterfsd(cleanup_an d_exit+0x6b) [0x562568920b8b] ) 0-: received signum (15), shutting down ``` ...and... 
``` [2019-05-06 03:34:39.698647] I [MSGID: 115036] [server.c:483:server_rpc_notify] 0-vol_95b9fad3e8bce2d1c9aac2da2af46057-server: disconnecting connection from node1-21644-2019/05/06-02:17:50:364351-vol_95b9fad3e8bce2d1c9aac2da2af46057-client-0-0-0 [2019-05-06 03:34:39.698956] I [MSGID: 101055] [client_t.c:444:gf_client_unref] 0-vol_95b9fad3e8bce2d1c9aac2da2af46057-server: Shutting down connection node1-21644-2019/05/06-02:17:50:364351-vol_95b9fad3e8bce2d1c9aac2da2af46057-client-0-0-0 [2019-05-06 03:34:54.929155] I [addr.c:55:compare_addr_and_update] 0-/var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_1adb1f9dad96381614efc30fe22943a7/brick: allowed = "*", received addr = "10.10.168.25" [2019-05-06 03:34:54.929223] I [login.c:111:gf_auth] 0-auth/login: allowed user names: 57cda2e6-f071-4ec4-b1a5-04f43f91a204 [2019-05-06 03:34:54.929253] I [MSGID: 115029] [server-handshake.c:495:server_setvolume] 0-vol_95b9fad3e8bce2d1c9aac2da2af46057-server: accepted client from node1-23801-2019/05/06-03:34:54:882971-vol_95b9fad3e8bce2d1c9aac2da2af46057-client-0-0-0 (version: 3.12.2) [2019-05-07 11:50:30.502074] I [MSGID: 115036] [server.c:483:server_rpc_notify] 0-vol_95b9fad3e8bce2d1c9aac2da2af46057-server: disconnecting connection from node1-23801-2019/05/06-03:34:54:882971-vol_95b9fad3e8bce2d1c9aac2da2af46057-client-0-0-0 [2019-05-07 11:50:30.524408] I [MSGID: 101055] [client_t.c:444:gf_client_unref] 0-vol_95b9fad3e8bce2d1c9aac2da2af46057-server: Shutting down connection node1-23801-2019/05/06-03:34:54:882971-vol_95b9fad3e8bce2d1c9aac2da2af46057-client-0-0-0 [2019-05-07 11:54:45.456189] W [glusterfsd.c:1514:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5) [0x7f684d7ccdd5] -->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xe5) [0x556c12dead65] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x556c12deab8b] ) 0-: received signum (15), shutting down Full gluster logs here: [glusterfs.zip](https://github.com/heketi/heketi/files/3178441/glusterfs.zip) I tried to get the heketi container logs, but it appears they don't exist: $ kubectl logs -n default heketi-7495cdc5fd-xqmxr -p failed to open log file "/var/log/pods/4a2574bb-6fa4-11e9-a315-005056b83c80/heketi/1741.log": open /var/log/pods/4a2574bb-6fa4-11e9-a315-005056b83c80/heketi/1741.log: no such file or directory Gluster seems to indicate that all of my bricks are offline: [root at node1 /]# gluster gluster> volume list heketidbstorage vol_050f6767658bceaed3e4c58693f3220e vol_0f8c60645f1014a72b9999036d6244e2 vol_27ef5ea360e90f459d56082bc2b7be9f vol_59090c2fd20479d553a5baa153d3fcbd vol_673aef3de9147eaede26b7169ebf5f6e vol_6848649bb5d29d60985d4d59380caafe vol_744c23296132470b8639599b837ae671 vol_76c6f946e64d2150f99503953127c647 vol_84337b6825c0eb3d7a0e6008b65dd757 vol_9e6cad52d8a8e2e7f8febe2709ef253a vol_a720850474f6ce7ae6c57dcc60284b1f vol_c98a28cd587883dc2882c00695b02d52 vol_ced2ac693a19d4ae53af897eaf13bd86 vol_dcece16823bead8503333ef11c022775 vol_e6cbcf7bcb912d6c9725f3390f96b4b3 vol_eaa27e3100f78bff42ff337f163fee0f vol_ff040fc48f8bd16727423b59ac7244c6 gluster> volume status Status of volume: heketidbstorage Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_a1 6f9f0374fe5db948a60a017a3f5e60/brick N/A N/A N N/A Task Status of Volume heketidbstorage ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: 
vol_050f6767658bceaed3e4c58693f3220e Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_a6 d6af28e7525bbe3563948f4f9455bd/brick N/A N/A N N/A Task Status of Volume vol_050f6767658bceaed3e4c58693f3220e ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_0f8c60645f1014a72b9999036d6244e2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_ed 81730bd5d36a151cf5163f379474b4/brick N/A N/A N N/A Task Status of Volume vol_0f8c60645f1014a72b9999036d6244e2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_27ef5ea360e90f459d56082bc2b7be9f Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_76 14e5014a0e402630a0e1fd776acf0a/brick N/A N/A N N/A Task Status of Volume vol_27ef5ea360e90f459d56082bc2b7be9f ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_59090c2fd20479d553a5baa153d3fcbd Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_93 e6fdd290a8e963d927de4a1115d17e/brick N/A N/A N N/A Task Status of Volume vol_59090c2fd20479d553a5baa153d3fcbd ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_673aef3de9147eaede26b7169ebf5f6e Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_0e d4f7f941de388cda678fe273e9ceb4/brick N/A N/A N N/A Task Status of Volume vol_673aef3de9147eaede26b7169ebf5f6e ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_6848649bb5d29d60985d4d59380caafe Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_5f 8b153d183d154b789425f5f5c8f912/brick N/A N/A N N/A Task Status of Volume vol_6848649bb5d29d60985d4d59380caafe ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_744c23296132470b8639599b837ae671 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_c1 1ac2780871f7d759a3da1c27e01941/brick N/A N/A N N/A Task Status of Volume vol_744c23296132470b8639599b837ae671 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_76c6f946e64d2150f99503953127c647 Gluster process TCP Port RDMA Port Online Pid 
------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_03 ef27d7e6834e4c7519a8db19369742/brick N/A N/A N N/A Task Status of Volume vol_76c6f946e64d2150f99503953127c647 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_84337b6825c0eb3d7a0e6008b65dd757 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_a3 cef78a5914a2808da0b5736e3daec7/brick N/A N/A N N/A Task Status of Volume vol_84337b6825c0eb3d7a0e6008b65dd757 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_9e6cad52d8a8e2e7f8febe2709ef253a Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_29 88103500386566a0ef4dd3fa69e429/brick N/A N/A N N/A Task Status of Volume vol_9e6cad52d8a8e2e7f8febe2709ef253a ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_a720850474f6ce7ae6c57dcc60284b1f Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_03 4310c050aa134b254316068472b4cc/brick N/A N/A N N/A Task Status of Volume vol_a720850474f6ce7ae6c57dcc60284b1f ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_c98a28cd587883dc2882c00695b02d52 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_38 41cba307728c0bd2a66a1429160112/brick N/A N/A N N/A Task Status of Volume vol_c98a28cd587883dc2882c00695b02d52 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_ced2ac693a19d4ae53af897eaf13bd86 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_c6 4cb733906d43c5101044898eac8a35/brick N/A N/A N N/A Task Status of Volume vol_ced2ac693a19d4ae53af897eaf13bd86 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_dcece16823bead8503333ef11c022775 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_76 bd80272c57164663bec3b1c9750366/brick N/A N/A N N/A Task Status of Volume vol_dcece16823bead8503333ef11c022775 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_e6cbcf7bcb912d6c9725f3390f96b4b3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v 
g_c197878af606e71a874ad28e3bd7e4e1/brick_63 ec19a814ece3152021772f71ddbd92/brick N/A N/A N N/A Task Status of Volume vol_e6cbcf7bcb912d6c9725f3390f96b4b3 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_eaa27e3100f78bff42ff337f163fee0f Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_38 7ecde606556b9d25487167b02e1e6b/brick N/A N/A N N/A Task Status of Volume vol_eaa27e3100f78bff42ff337f163fee0f ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: vol_ff040fc48f8bd16727423b59ac7244c6 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.10.168.25:/var/lib/heketi/mounts/v g_c197878af606e71a874ad28e3bd7e4e1/brick_21 2c67914837cef8a927922ee63c7ee7/brick N/A N/A N N/A Task Status of Volume vol_ff040fc48f8bd16727423b59ac7244c6 ------------------------------------------------------------------------------ There are no active volume tasks My heketi volume info: gluster> volume info heketidbstorage Volume Name: heketidbstorage Type: Distribute Volume ID: 34b897d0-0953-4f8f-9c5c-54e043e55d92 Status: Started Snapshot Count: 0 Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: 10.10.168.25:/var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_a16f9f0374fe5db948a60a017a3f5e60/brick Options Reconfigured: user.heketi.id: 1d2400626dac780fce12e45a07494853 transport.address-family: inet nfs.disable: on Our gluster settings/volume options: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: gluster-heketi selfLink: /apis/storage.k8s.io/v1/storageclasses/gluster-heketi parameters: gidMax: "50000" gidMin: "2000" resturl: http://10.233.35.158:8080 restuser: "null" restuserkey: "null" volumetype: "none" volumeoptions: cluster.post-op-delay-secs 0, performance.client-io-threads off, performance.open-behind off, performance.readdir-ahead off, performance.read-ahead off, performance.stat-prefetch off, performance.write-behind off, performance.io-cache off, cluster.consistent-metadata on, performance.quick-read off, performance.strict-o-direct on provisioner: kubernetes.io/glusterfs reclaimPolicy: Delete -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 17:36:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 17:36:11 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #3 from Jeff Bischoff --- Created attachment 1568579 --> https://bugzilla.redhat.com/attachment.cgi?id=1568579&action=edit Gluster logs -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 19:08:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 19:08:56 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... 
file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #4 from Jeff Bischoff --- Looking at the glusterd.log, it seems like everything was running for over a day with no log messages, when suddenly we hit this: got disconnect from stale rpc on /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_d0456279568a623a16a5508daa89b4d5/brick` Here's the context for that snippet. The lines from 05/06 were during brick startup, while the lines from 05/07 are when the problem started. ==== [2019-05-06 02:18:00.292652] I [glusterd-utils.c:6090:glusterd_brick_start] 0-management: starting a fresh brick process for brick /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_d0456279568a623a16a5508daa89b4d5/brick The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.7/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 12 times between [2019-05-06 02:17:49.214270] and [2019-05-06 02:17:59.537241] [2019-05-06 02:18:00.474120] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_d0456279568a623a16a5508daa89b4d5/brick on port 49169 [2019-05-06 02:18:00.477708] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2019-05-06 02:18:00.507596] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2019-05-06 02:18:00.507662] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: nfs service is stopped [2019-05-06 02:18:00.507682] I [MSGID: 106599] [glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed [2019-05-06 02:18:00.511313] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2019-05-06 02:18:00.511386] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: bitd service is stopped [2019-05-06 02:18:00.513396] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2019-05-06 02:18:00.513503] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped [2019-05-06 02:18:00.534304] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2c9a) [0x7f795f17fc9a] -->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2765) [0x7f795f17f765] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f79643180f5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=vol_d0a0dcf9903e236f68a3933c3060ec5a --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd [2019-05-06 02:18:00.582971] E [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2c9a) [0x7f795f17fc9a] -->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe26c3) [0x7f795f17f6c3] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f79643180f5] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_d0a0dcf9903e236f68a3933c3060ec5a --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd The message "W [MSGID: 101095] [xlator.c:452:xlator_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.7/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 76 times between [2019-05-06 02:16:52.212662] and 
[2019-05-06 02:17:58.606533] [2019-05-07 11:53:38.663362] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0x3a7a5) [0x7f795f0d77a5] -->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2765) [0x7f795f17f765] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f79643180f5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh --volname=vol_d0a0dcf9903e236f68a3933c3060ec5a --last=no [2019-05-07 11:53:38.905338] E [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0x3a7a5) [0x7f795f0d77a5] -->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe26c3) [0x7f795f17f6c3] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f79643180f5] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh --volname=vol_d0a0dcf9903e236f68a3933c3060ec5a --last=no [2019-05-07 11:53:38.982785] I [MSGID: 106542] [glusterd-utils.c:8253:glusterd_brick_signal] 0-glusterd: sending signal 15 to brick with pid 8951 [2019-05-07 11:53:39.983244] I [MSGID: 106143] [glusterd-pmap.c:397:pmap_registry_remove] 0-pmap: removing brick /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_d0456279568a623a16a5508daa89b4d5/brick on port 49169 [2019-05-07 11:53:39.984656] W [glusterd-handler.c:6124:__glusterd_brick_rpc_notify] 0-management: got disconnect from stale rpc on /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_d0456279568a623a16a5508daa89b4d5/brick [2019-05-07 11:53:40.316466] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2019-05-07 11:53:40.316601] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: nfs service is stopped [2019-05-07 11:53:40.316644] I [MSGID: 106599] [glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed [2019-05-07 11:53:40.319650] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2019-05-07 11:53:40.319708] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: bitd service is stopped [2019-05-07 11:53:40.321091] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2019-05-07 11:53:40.321132] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped ==== What would cause it to go stale? What is actually going stale here? Where should I look next? I am using whatever is built-in to gluster-centos:latest image from dockerhub. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 19:09:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 19:09:21 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #5 from Jeff Bischoff --- Looking at the glusterd.log, it seems like everything was running for over a day with no log messages, when suddenly we hit this: got disconnect from stale rpc on /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_d0456279568a623a16a5508daa89b4d5/brick` Here's the context for that snippet. The lines from 05/06 were during brick startup, while the lines from 05/07 are when the problem started. 
==== [2019-05-06 02:18:00.292652] I [glusterd-utils.c:6090:glusterd_brick_start] 0-management: starting a fresh brick process for brick /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_d0456279568a623a16a5508daa89b4d5/brick The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.7/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 12 times between [2019-05-06 02:17:49.214270] and [2019-05-06 02:17:59.537241] [2019-05-06 02:18:00.474120] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_d0456279568a623a16a5508daa89b4d5/brick on port 49169 [2019-05-06 02:18:00.477708] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2019-05-06 02:18:00.507596] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2019-05-06 02:18:00.507662] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: nfs service is stopped [2019-05-06 02:18:00.507682] I [MSGID: 106599] [glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed [2019-05-06 02:18:00.511313] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2019-05-06 02:18:00.511386] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: bitd service is stopped [2019-05-06 02:18:00.513396] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2019-05-06 02:18:00.513503] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped [2019-05-06 02:18:00.534304] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2c9a) [0x7f795f17fc9a] -->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2765) [0x7f795f17f765] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f79643180f5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=vol_d0a0dcf9903e236f68a3933c3060ec5a --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd [2019-05-06 02:18:00.582971] E [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2c9a) [0x7f795f17fc9a] -->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe26c3) [0x7f795f17f6c3] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f79643180f5] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_d0a0dcf9903e236f68a3933c3060ec5a --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd The message "W [MSGID: 101095] [xlator.c:452:xlator_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.7/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 76 times between [2019-05-06 02:16:52.212662] and [2019-05-06 02:17:58.606533] [2019-05-07 11:53:38.663362] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0x3a7a5) [0x7f795f0d77a5] -->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2765) [0x7f795f17f765] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f79643180f5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh --volname=vol_d0a0dcf9903e236f68a3933c3060ec5a --last=no [2019-05-07 11:53:38.905338] E [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0x3a7a5) 
[0x7f795f0d77a5] -->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe26c3) [0x7f795f17f6c3] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f79643180f5] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh --volname=vol_d0a0dcf9903e236f68a3933c3060ec5a --last=no [2019-05-07 11:53:38.982785] I [MSGID: 106542] [glusterd-utils.c:8253:glusterd_brick_signal] 0-glusterd: sending signal 15 to brick with pid 8951 [2019-05-07 11:53:39.983244] I [MSGID: 106143] [glusterd-pmap.c:397:pmap_registry_remove] 0-pmap: removing brick /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_d0456279568a623a16a5508daa89b4d5/brick on port 49169 [2019-05-07 11:53:39.984656] W [glusterd-handler.c:6124:__glusterd_brick_rpc_notify] 0-management: got disconnect from stale rpc on /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_d0456279568a623a16a5508daa89b4d5/brick [2019-05-07 11:53:40.316466] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2019-05-07 11:53:40.316601] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: nfs service is stopped [2019-05-07 11:53:40.316644] I [MSGID: 106599] [glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed [2019-05-07 11:53:40.319650] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2019-05-07 11:53:40.319708] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: bitd service is stopped [2019-05-07 11:53:40.321091] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2019-05-07 11:53:40.321132] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped ==== What would cause it to go stale? What is actually going stale here? Where should I look next? I am using whatever is built-in to gluster-centos:latest image from dockerhub. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 19:40:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 19:40:08 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #6 from Jeff Bischoff --- This is the version of Gluster that I am using on the gluster pod: # rpm -qa | grep gluster glusterfs-rdma-4.1.7-1.el7.x86_64 gluster-block-0.3-2.el7.x86_64 python2-gluster-4.1.7-1.el7.x86_64 centos-release-gluster41-1.0-3.el7.centos.noarch glusterfs-4.1.7-1.el7.x86_64 glusterfs-api-4.1.7-1.el7.x86_64 glusterfs-cli-4.1.7-1.el7.x86_64 glusterfs-geo-replication-4.1.7-1.el7.x86_64 glusterfs-libs-4.1.7-1.el7.x86_64 glusterfs-client-xlators-4.1.7-1.el7.x86_64 glusterfs-fuse-4.1.7-1.el7.x86_64 glusterfs-server-4.1.7-1.el7.x86_64 This is the version of gluster running on the Kubernetes node: $ rpm -qa | grep gluster glusterfs-libs-3.12.2-18.el7.x86_64 glusterfs-3.12.2-18.el7.x86_64 glusterfs-fuse-3.12.2-18.el7.x86_64 glusterfs-client-xlators-3.12.2-18.el7.x86_64 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
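A side note on the version listing above: the container is running glusterfs 4.1.7 while the node's FUSE client packages are 3.12.2, so it can be worth confirming that the cluster op-version and the connected clients are mutually compatible. The commands below are only a generic sketch of that check (output layout varies by release and is not taken from this report):

```
# Cluster-wide operating version (run inside the gluster pod)
gluster volume get all cluster.op-version

# List client connections; newer releases also print each client's op-version
gluster volume status all clients

# Package/CLI version on the pod and on the Kubernetes node
glusterfs --version
```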
From bugzilla at redhat.com Tue May 14 19:56:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 19:56:09 +0000 Subject: [Bugs] [Bug 1710054] New: Optimize the glustershd manager to send reconfigure Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710054 Bug ID: 1710054 Summary: Optimize the glustershd manager to send reconfigure Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: rkavunga at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Traditionally, every svc manager executes a process stop followed by a start each time it is called. That is not required for shd, because the attach request implemented for shd multiplexing is intelligent enough to check whether a detach is needed before attaching the graph. So there is no need to send an explicit detach request if we are sure that the next call is an attach request. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. Code reading 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 20:02:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 20:02:10 +0000 Subject: [Bugs] [Bug 1710054] Optimize the glustershd manager to send reconfigure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710054 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22729 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 14 20:02:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 14 May 2019 20:02:11 +0000 Subject: [Bugs] [Bug 1710054] Optimize the glustershd manager to send reconfigure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710054 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22729 (glusterd/shd: Optimize the glustershd manager to send reconfigure) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
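To make the optimization described in bug 1710054 a little more concrete, here is a small stand-alone C sketch of the idea: if the multiplexed shd attach path already detaches a stale graph on its own, the manager can drop the explicit stop/detach step. The names below (shd_attach, shd_stop, svc_manager_*) are invented for illustration and are not the real glusterd symbols.

```c
#include <stdio.h>

/* Invented stand-ins for the real shd svc routines, for illustration only. */
static void shd_attach(const char *volume)
{
    /* A multiplexed attach checks internally whether an old graph has to be
     * detached first, so callers need not request a detach themselves. */
    printf("attach(%s): detach-if-needed, then attach graph\n", volume);
}

static void shd_stop(const char *volume)
{
    printf("stop(%s): explicit detach/stop request\n", volume);
}

/* Traditional manager: always stop, then start. */
static void svc_manager_old(const char *volume)
{
    shd_stop(volume);
    shd_attach(volume);
}

/* Optimized manager: when the next call is an attach anyway, the explicit
 * stop is redundant and can be skipped. */
static void svc_manager_new(const char *volume)
{
    shd_attach(volume);
}

int main(void)
{
    svc_manager_old("vol_a");
    svc_manager_new("vol_a");
    return 0;
}
```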
From bugzilla at redhat.com Wed May 15 02:47:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 02:47:53 +0000 Subject: [Bugs] [Bug 1710159] New: glusterd: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710159 Bug ID: 1710159 Summary: glusterd: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded Product: GlusterFS Version: mainline Status: NEW Component: glusterd Keywords: Regression Severity: high Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: amukherj at redhat.com, bmekala at redhat.com, bugs at gluster.org, rallan at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1707246 Blocks: 1696807 Target Milestone: --- Classification: Community Description of problem: ======================= While performing an in-service upgrade (for geo-rep) of a 3 node cluster, 2 nodes (from the master) upgraded successfully . The third node which was on a previous build (3.4.4), while running 'gluster v status' it times out. [root at dhcp41-155]# gluster v status Error : Request timed out Other gluster commands seem to be working on this node. The same command on the upgraded nodes work as expected: (as shown) [root at dhcp42-173 glusterfs]# gluster v status Status of volume: gluster_shared_storage Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.70.42.211:/var/lib/glusterd/ss_bri ck 49152 0 Y 9661 Brick 10.70.41.155:/var/lib/glusterd/ss_bri ck 49155 0 Y 16101 Brick dhcp42-173.lab.eng.blr.redhat.com:/va r/lib/glusterd/ss_brick 49152 0 Y 4718 Self-heal Daemon on localhost N/A N/A Y 4809 Self-heal Daemon on 10.70.41.155 N/A N/A Y 16524 Self-heal Daemon on 10.70.42.211 N/A N/A Y 9964 Task Status of Volume gluster_shared_storage ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: master Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.70.42.173:/rhs/brick1/b1 49153 0 Y 4748 Brick 10.70.42.211:/rhs/brick1/b2 49153 0 Y 9698 Brick 10.70.41.155:/rhs/brick1/b3 49152 0 Y 15867 Brick 10.70.42.173:/rhs/brick2/b4 49154 0 Y 4757 Brick 10.70.42.211:/rhs/brick2/b5 49154 0 Y 9707 Brick 10.70.41.155:/rhs/brick2/b6 49153 0 Y 15888 Brick 10.70.42.173:/rhs/brick3/b7 49155 0 Y 4764 Brick 10.70.42.211:/rhs/brick3/b8 49155 0 Y 9722 Brick 10.70.41.155:/rhs/brick3/b9 49154 0 Y 15909 Self-heal Daemon on localhost N/A N/A Y 4809 Self-heal Daemon on 10.70.42.211 N/A N/A Y 9964 Self-heal Daemon on 10.70.41.155 N/A N/A Y 16524 Task Status of Volume master Error messages in glusterd log (on the 3.4.4 node) are as follows: ------------------------------------------------------------------ [2019-05-07 06:29:06.143535] E [rpc-clnt.c:185:call_bail] 0-management: bailing out frame type(glusterd mgmt) op(--(4)) xid = 0x19 sent = 2019-05-07 06:18:58.538764. timeout = 600 for 10.70.42.173:24007 [2019-05-07 06:29:06.143630] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Commit failed on dhcp42-173.lab.eng.blr.redhat.com. Please check log file for details. 
[2019-05-07 06:29:06.144183] I [socket.c:3699:socket_submit_reply] 0-socket.management: not connected (priv->connected = -1) [2019-05-07 06:29:06.144234] E [rpcsvc.c:1573:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0x2, Program: GlusterD svc cli, ProgVers: 2, Proc: 27) to rpc-transport (socket.management) [2019-05-07 06:29:06.144279] E [MSGID: 106430] [glusterd-utils.c:560:glusterd_submit_reply] 0-glusterd: Reply submission failed [2019-05-07 06:42:46.327616] E [rpc-clnt.c:185:call_bail] 0-management: bailing out frame type(glusterd mgmt) op(--(4)) xid = 0x1b sent = 2019-05-07 06:32:44.342818. timeout = 600 for 10.70.42.173:24007 [2019-05-07 06:42:46.327901] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Commit failed on dhcp42-173.lab.eng.blr.redhat.com. Please check log file for details. [2019-05-07 06:42:50.328686] E [rpc-clnt.c:185:call_bail] 0-management: bailing out frame type(glusterd mgmt) op(--(4)) xid = 0x1a sent = 2019-05-07 06:32:44.342952. timeout = 600 for 10.70.42.211:24007 [2019-05-07 06:42:50.328839] E [MSGID: 106153] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Commit failed on 10.70.42.211. Please check log file for details. [2019-05-07 06:42:50.329321] I [socket.c:3699:socket_submit_reply] 0-socket.management: not connected (priv->connected = -1) [2019-05-07 06:42:50.329356] E [rpcsvc.c:1573:rpcsvc_submit_generic] 0-rpc-service: failed to submit message (XID: 0x2, Program: GlusterD svc cli, ProgVers: 2, Proc: 27) to rpc-transport (socket.management) [2019-05-07 06:42:50.329402] E [MSGID: 106430] [glusterd-utils.c:560:glusterd_submit_reply] 0-glusterd: Reply submission failed On the upgraded node: (glusterd log) ---------------------- [2019-05-07 06:18:58.535711] E [glusterd-op-sm.c:8193:glusterd_op_sm] (-->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x24c8e) [0x7f6ae684ec8e] -->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x1d05e) [0x7f6ae684705e] -->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x444ff) [0x7f6ae686e4ff] ) 0-management: Unable to get transaction opinfo for transaction ID :bc0a5ca5-f3a1-4c27-b263-d2d34289cbe3 [2019-05-07 06:32:44.339866] E [glusterd-op-sm.c:8193:glusterd_op_sm] (-->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x24c8e) [0x7f6ae684ec8e] -->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x1d05e) [0x7f6ae684705e] -->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x444ff) [0x7f6ae686e4ff] ) 0-management: Unable to get transaction opinfo for transaction ID :949c5d20-fb42-4d4d-8175-3e3dc158d310 [2019-05-07 06:43:02.694136] E [glusterd-op-sm.c:8193:glusterd_op_sm] (-->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x24c8e) [0x7f6ae684ec8e] -->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x1d05e) [0x7f6ae684705e] -->/usr/lib64/glusterfs/6.0/xlator/mgmt/glusterd.so(+0x444ff) [0x7f6ae686e4ff] ) 0-management: Unable to get transaction opinfo for transaction ID :d150e923-3749-439a-ae3a-82ce997c8608 Version-Release number of selected component (if applicable): ============================================================== UPGRADED NODE ==> [root at dhcp42-173]# rpm -qa | grep gluster python2-gluster-6.0-2.el7rhgs.x86_64 vdsm-gluster-4.19.43-2.3.el7rhgs.noarch libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.7.x86_64 glusterfs-cli-6.0-2.el7rhgs.x86_64 glusterfs-6.0-2.el7rhgs.x86_64 glusterfs-fuse-6.0-2.el7rhgs.x86_64 glusterfs-geo-replication-6.0-2.el7rhgs.x86_64 gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64 glusterfs-libs-6.0-2.el7rhgs.x86_64 
glusterfs-api-6.0-2.el7rhgs.x86_64 glusterfs-events-6.0-2.el7rhgs.x86_64 tendrl-gluster-integration-1.6.3-12.el7rhgs.noarch gluster-nagios-common-0.2.4-1.el7rhgs.noarch glusterfs-server-6.0-2.el7rhgs.x86_64 glusterfs-client-xlators-6.0-2.el7rhgs.x86_64 glusterfs-rdma-6.0-2.el7rhgs.x86_64 NODE TO BE UPGRADED ==> [root at dhcp41-155]# rpm -qa | grep gluster glusterfs-events-3.12.2-47.el7rhgs.x86_64 glusterfs-rdma-3.12.2-47.el7rhgs.x86_64 vdsm-gluster-4.19.43-2.3.el7rhgs.noarch libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.7.x86_64 glusterfs-cli-3.12.2-47.el7rhgs.x86_64 glusterfs-fuse-3.12.2-47.el7rhgs.x86_64 glusterfs-server-3.12.2-47.el7rhgs.x86_64 gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64 glusterfs-libs-3.12.2-47.el7rhgs.x86_64 glusterfs-3.12.2-47.el7rhgs.x86_64 glusterfs-geo-replication-3.12.2-47.el7rhgs.x86_64 tendrl-gluster-integration-1.6.3-12.el7rhgs.noarch glusterfs-client-xlators-3.12.2-47.el7rhgs.x86_64 glusterfs-api-3.12.2-47.el7rhgs.x86_64 gluster-nagios-common-0.2.4-1.el7rhgs.noarch python2-gluster-3.12.2-47.el7rhgs.x86_64 How reproducible: ================= 1/1 Steps to Reproduce: ==================== In-service upgrade (geo-rep) Actual results: =============== While performing an in-service upgrade on a 3 node cluster, the third node which was on a previous build (3.4.4), while running 'gluster v status' it times out. Expected results: ================= 'gluster v status' should not time out Additional info: =============== In-service upgrade was still in progress and hence cluster.op-version is still on the previous op-version. The cluster is essentially in the middle of an in-service upgrade scenario. --- Additional comment from Rochelle on 2019-05-07 13:00:30 IST --- I've sent a mail with the access and credentials to my systems. 
--- Additional comment from Rochelle on 2019-05-07 14:25:32 IST --- pstack outputs from each node: Node 1: (Upgraded node) ----------------------- [root at dhcp42-173 ~]# pstack 4457 Thread 9 (Thread 0x7f6ae9d10700 (LWP 4458)): #0 0x00007f6af1578e3d in nanosleep () from /lib64/libpthread.so.0 #1 0x00007f6af27421c6 in gf_timer_proc (data=0x563bd3637ca0) at timer.c:194 #2 0x00007f6af1571dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f6af0e38ead in clone () from /lib64/libc.so.6 Thread 8 (Thread 0x7f6ae950f700 (LWP 4459)): #0 0x00007f6af1579361 in sigwait () from /lib64/libpthread.so.0 #1 0x0000563bd286143b in glusterfs_sigwaiter (arg=) at glusterfsd.c:2370 #2 0x00007f6af1571dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f6af0e38ead in clone () from /lib64/libc.so.6 Thread 7 (Thread 0x7f6ae8d0e700 (LWP 4460)): #0 0x00007f6af0dffe2d in nanosleep () from /lib64/libc.so.6 #1 0x00007f6af0dffcc4 in sleep () from /lib64/libc.so.6 #2 0x00007f6af275f5bd in pool_sweeper (arg=) at mem-pool.c:473 #3 0x00007f6af1571dd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f6af0e38ead in clone () from /lib64/libc.so.6 Thread 6 (Thread 0x7f6ae850d700 (LWP 4461)): #0 0x00007f6af1575d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f6af2774800 in syncenv_task (proc=proc at entry=0x563bd3638430) at syncop.c:612 #2 0x00007f6af27756b0 in syncenv_processor (thdata=0x563bd3638430) at syncop.c:679 #3 0x00007f6af1571dd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f6af0e38ead in clone () from /lib64/libc.so.6 Thread 5 (Thread 0x7f6ae7d0c700 (LWP 4462)): #0 0x00007f6af1575d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f6af2774800 in syncenv_task (proc=proc at entry=0x563bd36387f0) at syncop.c:612 #2 0x00007f6af27756b0 in syncenv_processor (thdata=0x563bd36387f0) at syncop.c:679 #3 0x00007f6af1571dd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f6af0e38ead in clone () from /lib64/libc.so.6 Thread 4 (Thread 0x7f6ae750b700 (LWP 4463)): #0 0x00007f6af0e2ff73 in select () from /lib64/libc.so.6 #1 0x00007f6af27b37f4 in runner (arg=0x563bd363c590) at ../../contrib/timer-wheel/timer-wheel.c:186 #2 0x00007f6af1571dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f6af0e38ead in clone () from /lib64/libc.so.6 Thread 3 (Thread 0x7f6ae3b28700 (LWP 4654)): #0 0x00007f6af1575965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f6ae6917f2b in hooks_worker (args=) at glusterd-hooks.c:527 #2 0x00007f6af1571dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f6af0e38ead in clone () from /lib64/libc.so.6 Thread 2 (Thread 0x7f6ae3327700 (LWP 4655)): #0 0x00007f6af0e39483 in epoll_wait () from /lib64/libc.so.6 #1 0x00007f6af2799050 in event_dispatch_epoll_worker (data=0x563bd3759870) at event-epoll.c:751 #2 0x00007f6af1571dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f6af0e38ead in clone () from /lib64/libc.so.6 Thread 1 (Thread 0x7f6af2c31780 (LWP 4457)): #0 0x00007f6af1572f47 in pthread_join () from /lib64/libpthread.so.0 #1 0x00007f6af2798468 in event_dispatch_epoll (event_pool=0x563bd362f5b0) at event-epoll.c:846 #2 0x0000563bd285d9b5 in main (argc=5, argv=) at glusterfsd.c:2866 Node 2 : (upgraded node) ------------------------ [root at dhcp42-211 ~]# pstack 4493 Thread 9 (Thread 0x7f9f5c2c2700 (LWP 4495)): #0 0x00007f9f63b2ae3d in nanosleep () from /lib64/libpthread.so.0 #1 0x00007f9f64cf41c6 in gf_timer_proc (data=0x562895ff8ca0) at timer.c:194 #2 
0x00007f9f63b23dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f9f633eaead in clone () from /lib64/libc.so.6 Thread 8 (Thread 0x7f9f5bac1700 (LWP 4496)): #0 0x00007f9f63b2b361 in sigwait () from /lib64/libpthread.so.0 #1 0x00005628944ba43b in glusterfs_sigwaiter (arg=) at glusterfsd.c:2370 #2 0x00007f9f63b23dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f9f633eaead in clone () from /lib64/libc.so.6 Thread 7 (Thread 0x7f9f5b2c0700 (LWP 4497)): #0 0x00007f9f633b1e2d in nanosleep () from /lib64/libc.so.6 #1 0x00007f9f633b1cc4 in sleep () from /lib64/libc.so.6 #2 0x00007f9f64d115bd in pool_sweeper (arg=) at mem-pool.c:473 #3 0x00007f9f63b23dd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f9f633eaead in clone () from /lib64/libc.so.6 Thread 6 (Thread 0x7f9f5aabf700 (LWP 4498)): #0 0x00007f9f63b27d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f9f64d26800 in syncenv_task (proc=proc at entry=0x562895ff9430) at syncop.c:612 #2 0x00007f9f64d276b0 in syncenv_processor (thdata=0x562895ff9430) at syncop.c:679 #3 0x00007f9f63b23dd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f9f633eaead in clone () from /lib64/libc.so.6 Thread 5 (Thread 0x7f9f5a2be700 (LWP 4499)): #0 0x00007f9f63b27d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f9f64d26800 in syncenv_task (proc=proc at entry=0x562895ff97f0) at syncop.c:612 #2 0x00007f9f64d276b0 in syncenv_processor (thdata=0x562895ff97f0) at syncop.c:679 #3 0x00007f9f63b23dd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f9f633eaead in clone () from /lib64/libc.so.6 Thread 4 (Thread 0x7f9f59abd700 (LWP 4500)): #0 0x00007f9f633e1f73 in select () from /lib64/libc.so.6 #1 0x00007f9f64d657f4 in runner (arg=0x562895ffd590) at ../../contrib/timer-wheel/timer-wheel.c:186 #2 0x00007f9f63b23dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f9f633eaead in clone () from /lib64/libc.so.6 Thread 3 (Thread 0x7f9f560da700 (LWP 4693)): #0 0x00007f9f63b27965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f9f58ec9f2b in hooks_worker (args=) at glusterd-hooks.c:527 #2 0x00007f9f63b23dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f9f633eaead in clone () from /lib64/libc.so.6 Thread 2 (Thread 0x7f9f558d9700 (LWP 4694)): #0 0x00007f9f633eb483 in epoll_wait () from /lib64/libc.so.6 #1 0x00007f9f64d4b050 in event_dispatch_epoll_worker (data=0x562896119ae0) at event-epoll.c:751 #2 0x00007f9f63b23dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f9f633eaead in clone () from /lib64/libc.so.6 Thread 1 (Thread 0x7f9f651e3780 (LWP 4493)): #0 0x00007f9f63b24f47 in pthread_join () from /lib64/libpthread.so.0 #1 0x00007f9f64d4a468 in event_dispatch_epoll (event_pool=0x562895ff05b0) at event-epoll.c:846 #2 0x00005628944b69b5 in main (argc=5, argv=) at glusterfsd.c:2866 Node 3 (Node to be upgraded) ----------------------------- [root at dhcp41-155 yum.repos.d]# pstack 3942 Thread 8 (Thread 0x7f468c4a2700 (LWP 3943)): #0 0x00007f4693d0ae3d in nanosleep () from /lib64/libpthread.so.0 #1 0x00007f4694ed5f86 in gf_timer_proc (data=0x556b28add800) at timer.c:174 #2 0x00007f4693d03dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f46935caead in clone () from /lib64/libc.so.6 Thread 7 (Thread 0x7f468bca1700 (LWP 3944)): #0 0x00007f4693d0b361 in sigwait () from /lib64/libpthread.so.0 #1 0x0000556b26f90c7b in glusterfs_sigwaiter (arg=) at glusterfsd.c:2242 #2 0x00007f4693d03dd5 in start_thread () from 
/lib64/libpthread.so.0 #3 0x00007f46935caead in clone () from /lib64/libc.so.6 Thread 6 (Thread 0x7f468b4a0700 (LWP 3945)): #0 0x00007f4693591e2d in nanosleep () from /lib64/libc.so.6 #1 0x00007f4693591cc4 in sleep () from /lib64/libc.so.6 #2 0x00007f4694ef0b9d in pool_sweeper (arg=) at mem-pool.c:470 #3 0x00007f4693d03dd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f46935caead in clone () from /lib64/libc.so.6 Thread 5 (Thread 0x7f468ac9f700 (LWP 3946)): #0 0x00007f4693d07d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f4694f03dd8 in syncenv_task (proc=proc at entry=0x556b28ade020) at syncop.c:603 #2 0x00007f4694f04ca0 in syncenv_processor (thdata=0x556b28ade020) at syncop.c:695 #3 0x00007f4693d03dd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f46935caead in clone () from /lib64/libc.so.6 Thread 4 (Thread 0x7f468a49e700 (LWP 3947)): #0 0x00007f4693d07d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f4694f03dd8 in syncenv_task (proc=proc at entry=0x556b28ade3e0) at syncop.c:603 #2 0x00007f4694f04ca0 in syncenv_processor (thdata=0x556b28ade3e0) at syncop.c:695 #3 0x00007f4693d03dd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f46935caead in clone () from /lib64/libc.so.6 Thread 3 (Thread 0x7f46852f1700 (LWP 4320)): #0 0x00007f4693d07965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f46898b783b in hooks_worker (args=) at glusterd-hooks.c:529 #2 0x00007f4693d03dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f46935caead in clone () from /lib64/libc.so.6 Thread 2 (Thread 0x7f4684af0700 (LWP 4321)): #0 0x00007f46935cb483 in epoll_wait () from /lib64/libc.so.6 #1 0x00007f4694f266e8 in event_dispatch_epoll_worker (data=0x556b28af8780) at event-epoll.c:749 #2 0x00007f4693d03dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f46935caead in clone () from /lib64/libc.so.6 Thread 1 (Thread 0x7f46953ad780 (LWP 3942)): #0 0x00007f4693d04f47 in pthread_join () from /lib64/libpthread.so.0 #1 0x00007f4694f26f78 in event_dispatch_epoll (event_pool=0x556b28ad5a70) at event-epoll.c:846 #2 0x0000556b26f8d538 in main (argc=5, argv=) at glusterfsd.c:2692 --- Additional comment from Atin Mukherjee on 2019-05-07 14:42:05 IST --- I don't see any threads being stuck. Until and unless this problem is reproducible, nothing can be done here. Does the volume status still times out? If so, a gdb session can be used and the problem can be debugged. --- Additional comment from Rochelle on 2019-05-08 10:04:12 IST --- Yes Atin, the issue still persists. I have sent the credentials of the systems in an email. --- Additional comment from Rochelle on 2019-05-08 11:16:09 IST --- Hit this issue again but differently. This time on RHGS 3.5 nodes 1. Ran the gdeploy conf file to create a geo-rep session. 2. Once the run was successful, I ran a 'gluster v geo-rep status' on the master node (to see if creation was successfull)and this timed out as shown below: [root at dhcp42-131 ~]# gluster v geo-rep status Error : Request timed out geo-replication command failed 3. When I ran the full (specific) ge-rep status, it worked as expected. 
[root at dhcp42-131 ~]# gluster v geo-replication master 10.70.42.250::slave status MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED ----------------------------------------------------------------------------------------------------------------------------------------------------- 10.70.42.131 master /rhs/brick1/b1 root 10.70.42.250::slave 10.70.42.250 Active Changelog Crawl 2019-05-08 05:37:20 10.70.42.131 master /rhs/brick2/b4 root 10.70.42.250::slave 10.70.42.250 Active Changelog Crawl 2019-05-08 05:37:20 10.70.42.131 master /rhs/brick2/b7 root 10.70.42.250::slave 10.70.42.250 Active Changelog Crawl 2019-05-08 05:37:20 10.70.42.255 master /rhs/brick1/b3 root 10.70.42.250::slave 10.70.43.245 Passive N/A N/A 10.70.42.255 master /rhs/brick2/b6 root 10.70.42.250::slave 10.70.43.245 Passive N/A N/A 10.70.42.255 master /rhs/brick2/b9 root 10.70.42.250::slave 10.70.43.245 Passive N/A N/A 10.70.42.14 master /rhs/brick1/b2 root 10.70.42.250::slave 10.70.41.224 Passive N/A N/A 10.70.42.14 master /rhs/brick2/b5 root 10.70.42.250::slave 10.70.41.224 Passive N/A N/A 10.70.42.14 master /rhs/brick2/b8 root 10.70.42.250::slave 10.70.41.224 Passive N/A N/A --- Additional comment from Bala Konda Reddy M on 2019-05-09 11:27:14 IST --- Atin/Sanju, While upgarding my cluster(6 nodes), I am able to hit the issue. Upgraded first node and checked volume status and heal info working fine on upgraded node. On node(2nd node) which is still in 3.4.4, gluster volume status is in hung state. Regards, Bala Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1707246 [Bug 1707246] [glusterd]: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 15 02:51:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 02:51:25 +0000 Subject: [Bugs] [Bug 1710159] glusterd: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710159 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22730 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 15 02:51:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 02:51:26 +0000 Subject: [Bugs] [Bug 1710159] glusterd: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710159 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22730 (glusterd: add an op-version check) posted (#1) for review on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
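Since one of the comments above suggests debugging the hang with a gdb session, a generic way to capture that information on the node where 'gluster v status' times out is sketched below. It assumes gdb and the matching glusterfs debuginfo packages are installed and is not taken from the original report.

```
# Full backtraces of every glusterd thread, written to a file
gdb -p "$(pidof glusterd)" -batch -ex "thread apply all bt full" > /tmp/glusterd-bt.txt

# A glusterd statedump is often useful too; it is written under /var/run/gluster
kill -USR1 "$(pidof glusterd)"
```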
From bugzilla at redhat.com Wed May 15 03:50:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 03:50:35 -0000 Subject: [Bugs] [Bug 1686009] gluster fuse crashed with segmentation fault possibly due to dentry not found In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686009 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|high |medium Status|POST |NEW Assignee|atumball at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 15 04:17:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 04:17:57 +0000 Subject: [Bugs] [Bug 1709130] thin-arbiter lock release fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709130 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22731 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 15 04:17:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 04:17:58 +0000 Subject: [Bugs] [Bug 1709130] thin-arbiter lock release fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709130 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22731 (afr: thin-arbiter lock release fixes) posted (#1) for review on release-6 by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 15 05:32:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 05:32:15 +0000 Subject: [Bugs] [Bug 1710159] glusterd: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710159 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1696807 |1707246 Depends On|1707246 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1707246 [Bug 1707246] [glusterd]: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed May 15 05:34:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 05:34:33 +0000 Subject: [Bugs] [Bug 1708926] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708926 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(rkavunga at redhat.c | |om) --- Comment #4 from Pranith Kumar K --- (In reply to Mohammed Rafi KC from comment #3) > Stack trace of thread 30877: > #0 0x0000000000406a07 cleanup_and_exit (glusterfsd) > #1 0x0000000000406b5d glusterfs_sigwaiter (glusterfsd) > #2 0x00007f51000cd58e start_thread (libpthread.so.0) > #3 0x00007f50ffd1d683 __clone (libc.so.6) > > Stack trace of thread 30879: > #0 0x00007f51000d3a7a futex_abstimed_wait_cancelable > (libpthread.so.0) > #1 0x00007f51003b8616 syncenv_task (libglusterfs.so.0) > #2 0x00007f51003b9240 syncenv_processor (libglusterfs.so.0) > #3 0x00007f51000cd58e start_thread (libpthread.so.0) > #4 0x00007f50ffd1d683 __clone (libc.so.6) > > Stack trace of thread 30881: > #0 0x00007f50ffd14cdf __GI___select (libc.so.6) > #1 0x00007f51003ef1cd runner (libglusterfs.so.0) > #2 0x00007f51000cd58e start_thread (libpthread.so.0) > #3 0x00007f50ffd1d683 __clone (libc.so.6) > > Stack trace of thread 30880: > #0 0x00007f51000d3a7a futex_abstimed_wait_cancelable > (libpthread.so.0) > #1 0x00007f51003b8616 syncenv_task (libglusterfs.so.0) > #2 0x00007f51003b9240 syncenv_processor (libglusterfs.so.0) > #3 0x00007f51000cd58e start_thread (libpthread.so.0) > #4 0x00007f50ffd1d683 __clone (libc.so.6) > > Stack trace of thread 30876: > #0 0x00007f51000d7500 __GI___nanosleep (libpthread.so.0) > #1 0x00007f510038a346 gf_timer_proc (libglusterfs.so.0) > #2 0x00007f51000cd58e start_thread (libpthread.so.0) > #3 0x00007f50ffd1d683 __clone (libc.so.6) > > Stack trace of thread 30882: > #0 0x00007f50ffd1e06e epoll_ctl (libc.so.6) > #1 0x00007f51003d931e event_handled_epoll > (libglusterfs.so.0) > #2 0x00007f50eed9a781 socket_event_poll_in (socket.so) > #3 0x00007f51003d8c9b event_dispatch_epoll_handler > (libglusterfs.so.0) > #4 0x00007f51000cd58e start_thread (libpthread.so.0) > #5 0x00007f50ffd1d683 __clone (libc.so.6) > > Stack trace of thread 30875: > #0 0x00007f51000cea6d __GI___pthread_timedjoin_ex > (libpthread.so.0) > #1 0x00007f51003d8387 event_dispatch_epoll > (libglusterfs.so.0) > #2 0x0000000000406592 main (glusterfsd) > #3 0x00007f50ffc44413 __libc_start_main (libc.so.6) > #4 0x00000000004067de _start (glusterfsd) > > Stack trace of thread 30878: > #0 0x00007f50ffce97f8 __GI___nanosleep (libc.so.6) > #1 0x00007f50ffce96fe __sleep (libc.so.6) > #2 0x00007f51003a4f5a pool_sweeper (libglusterfs.so.0) > #3 0x00007f51000cd58e start_thread (libpthread.so.0) > #4 0x00007f50ffd1d683 __clone (libc.so.6) > > Stack trace of thread 30883: > #0 0x00007f51000d6b8d __lll_lock_wait (libpthread.so.0) > #1 0x00007f51000cfda9 __GI___pthread_mutex_lock > (libpthread.so.0) > #2 0x00007f510037cd1f _gf_msg_plain_internal > (libglusterfs.so.0) > #3 0x00007f510037ceb3 _gf_msg_plain (libglusterfs.so.0) > #4 0x00007f5100382d43 gf_log_dump_graph (libglusterfs.so.0) > #5 0x00007f51003b514f glusterfs_process_svc_attach_volfp > (libglusterfs.so.0) > #6 0x000000000040b16d mgmt_process_volfile (glusterfsd) > #7 0x0000000000410792 mgmt_getspec_cbk (glusterfsd) > #8 0x00007f51003256b1 rpc_clnt_handle_reply (libgfrpc.so.0) > #9 0x00007f5100325a53 
rpc_clnt_notify (libgfrpc.so.0) > #10 0x00007f5100322973 rpc_transport_notify (libgfrpc.so.0) > #11 0x00007f50eed9a45c socket_event_poll_in (socket.so) > #12 0x00007f51003d8c9b event_dispatch_epoll_handler > (libglusterfs.so.0) > #13 0x00007f51000cd58e start_thread (libpthread.so.0) > #14 0x00007f50ffd1d683 __clone (libc.so.6) Was graph->active NULL? What lead to the crash? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 15 06:01:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 06:01:17 +0000 Subject: [Bugs] [Bug 1707671] Cronjob of feeding gluster blogs from different account into planet gluster isn't working In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707671 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com Severity|unspecified |urgent -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 15 06:13:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 06:13:49 +0000 Subject: [Bugs] [Bug 1710159] glusterd: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710159 --- Comment #2 from Sanju --- Root cause: With commit 34e010d64, we have added some conditions to set txn-opinfo to avoid the memory leak in txn-opinfo object. But, in a heterogeneous cluster the upgraded and non-upgraded nodes are following different conditions to set txn-opinfo. This is leading the get-txn-opinfo operation to fail and eventually the process hungs. [root at server2 glusterfs]# git show 34e010d64 commit 34e010d64905b7387de57840d3fb16a326853c9b Author: Atin Mukherjee Date: Mon Mar 18 16:08:04 2019 +0530 glusterd: fix txn-id mem leak This commit ensures the following: 1. Don't send commit op request to the remote nodes when gluster v status all is executed as for the status all transaction the local commit gets the name of the volumes and remote commit ops are technically a no-op. So no need for additional rpc requests. 2. In op state machine flow, if the transaction is in staged state and op_info.skip_locking is true, then no need to set the txn id in the priv->glusterd_txn_opinfo dictionary which never gets freed. 
Fixes: bz#1691164 Change-Id: Ib6a9300ea29633f501abac2ba53fb72ff648c822 Signed-off-by: Atin Mukherjee diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c index 6495a9d88..84c34f1fe 100644 --- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c +++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c @@ -5652,6 +5652,9 @@ glusterd_op_ac_stage_op(glusterd_op_sm_event_t *event, void *ctx) dict_t *dict = NULL; xlator_t *this = NULL; uuid_t *txn_id = NULL; + glusterd_op_info_t txn_op_info = { + {0}, + }; this = THIS; GF_ASSERT(this); @@ -5686,6 +5689,7 @@ glusterd_op_ac_stage_op(glusterd_op_sm_event_t *event, void *ctx) ret = -1; goto out; } + ret = glusterd_get_txn_opinfo(&event->txn_id, &txn_op_info); ret = dict_set_bin(rsp_dict, "transaction_id", txn_id, sizeof(*txn_id)); if (ret) { @@ -5704,6 +5708,12 @@ out: gf_msg_debug(this->name, 0, "Returning with %d", ret); + /* for no volname transactions, the txn_opinfo needs to be cleaned up + * as there's no unlock event triggered + */ + if (txn_op_info.skip_locking) + ret = glusterd_clear_txn_opinfo(txn_id); + if (rsp_dict) dict_unref(rsp_dict); @@ -8159,12 +8169,16 @@ glusterd_op_sm() "Unable to clear " "transaction's opinfo"); } else { - ret = glusterd_set_txn_opinfo(&event->txn_id, &opinfo); - if (ret) - gf_msg(this->name, GF_LOG_ERROR, 0, - GD_MSG_TRANS_OPINFO_SET_FAIL, - "Unable to set " - "transaction's opinfo"); + if (!(event_type == GD_OP_EVENT_STAGE_OP && + opinfo.state.state == GD_OP_STATE_STAGED && + opinfo.skip_locking)) { <---- now, upgraded nodes will not set txn-opinfo when this condition is false, so the glusterd_get_txn_opinfo() after this is failing. previously we used to set txn-opinfo in every state of op-sm and glusterd_get_txn_opinfo will be called in every phase. We need to add an op-version check for this change. + ret = glusterd_set_txn_opinfo(&event->txn_id, &opinfo); + if (ret) + gf_msg(this->name, GF_LOG_ERROR, 0, + GD_MSG_TRANS_OPINFO_SET_FAIL, + "Unable to set " + "transaction's opinfo"); + } } glusterd_destroy_op_event_ctx(event); diff --git a/xlators/mgmt/glusterd/src/glusterd-syncop.c b/xlators/mgmt/glusterd/src/glusterd-syncop.c index 45b221c2e..9bab2cfd5 100644 --- a/xlators/mgmt/glusterd/src/glusterd-syncop.c +++ b/xlators/mgmt/glusterd/src/glusterd-syncop.c @@ -1392,6 +1392,8 @@ gd_commit_op_phase(glusterd_op_t op, dict_t *op_ctx, dict_t *req_dict, char *errstr = NULL; struct syncargs args = {0}; int type = GF_QUOTA_OPTION_TYPE_NONE; + uint32_t cmd = 0; + gf_boolean_t origin_glusterd = _gf_false; this = THIS; GF_ASSERT(this); @@ -1449,6 +1451,20 @@ commit_done: gd_syncargs_init(&args, op_ctx); synctask_barrier_init((&args)); peer_cnt = 0; + origin_glusterd = is_origin_glusterd(req_dict); + + if (op == GD_OP_STATUS_VOLUME) { + ret = dict_get_uint32(req_dict, "cmd", &cmd); + if (ret) + goto out; + + if (origin_glusterd) { + if ((cmd & GF_CLI_STATUS_ALL)) { + ret = 0; + goto out; + } + } + } RCU_READ_LOCK; cds_list_for_each_entry_rcu(peerinfo, &conf->peers, uuid_list) (END) Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
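A standalone sketch of the op-version gating suggested in comment #2 above: this is not the actual glusterd patch, and OP_VERSION_WITH_FIX is a hypothetical constant, but it models how the new "skip setting txn-opinfo for staged, skip-locking transactions" behaviour could be applied only once the whole cluster runs an op-version that understands it, so upgraded and non-upgraded nodes stay consistent.

```
/* Standalone model (not GlusterFS source) of the op-version check described
 * in comment #2: apply the new "don't re-set txn_opinfo for staged,
 * skip-locking transactions" behaviour only when the cluster op-version is
 * high enough, otherwise keep the old "set in every state" behaviour so a
 * mixed-version cluster stays consistent. All names are illustrative. */
#include <stdbool.h>
#include <stdio.h>

enum op_event { EVENT_STAGE_OP, EVENT_COMMIT_OP };
enum op_state { STATE_DEFAULT, STATE_STAGED };

struct txn {
    enum op_event event;
    enum op_state state;
    bool skip_locking;
};

#define OP_VERSION_WITH_FIX 70000 /* hypothetical op-version carrying the fix */

static bool should_set_txn_opinfo(const struct txn *t, int cluster_op_version)
{
    /* Old behaviour: always set txn_opinfo. */
    if (cluster_op_version < OP_VERSION_WITH_FIX)
        return true;
    /* New behaviour: skip the redundant set for staged, skip-locking txns. */
    return !(t->event == EVENT_STAGE_OP && t->state == STATE_STAGED &&
             t->skip_locking);
}

int main(void)
{
    struct txn staged_skip = { EVENT_STAGE_OP, STATE_STAGED, true };

    printf("mixed cluster (op-version 60000): set opinfo? %s\n",
           should_set_txn_opinfo(&staged_skip, 60000) ? "yes" : "no");
    printf("upgraded cluster (op-version 70000): set opinfo? %s\n",
           should_set_txn_opinfo(&staged_skip, 70000) ? "yes" : "no");
    return 0;
}
```

Running it shows the mixed cluster keeping the old always-set behaviour while a fully upgraded cluster skips the redundant set, which is the consistency property the proposed op-version check is meant to guarantee.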
From bugzilla at redhat.com Wed May 15 07:40:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 07:40:16 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22732 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 15 07:40:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 07:40:16 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #660 from Worker Ant --- REVIEW: https://review.gluster.org/22732 ([WIP][RFC]inode.c/h: small optimizations for inode_ctx_* funcs) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 15 10:37:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 10:37:17 +0000 Subject: [Bugs] [Bug 1709130] thin-arbiter lock release fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709130 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-15 10:37:17 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22731 (afr: thin-arbiter lock release fixes) merged (#1) on release-6 by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 15 10:37:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 10:37:40 +0000 Subject: [Bugs] [Bug 1709660] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709660 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-15 10:37:40 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22721 (cluster/ec: Reopen shouldn't happen with O_TRUNC) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 15 10:57:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 10:57:43 +0000 Subject: [Bugs] [Bug 1707671] Cronjob of feeding gluster blogs from different account into planet gluster isn't working In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707671 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |dkhandel at redhat.com Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-15 10:57:43 --- Comment #1 from Deepshikha khandelwal --- It is fixed. I can see your blog there: https://planet.gluster.org/ For other blogs you need to update the feed.yml -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed May 15 12:43:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 12:43:00 +0000 Subject: [Bugs] [Bug 1710371] New: Minor improvements across codebase for performance gain Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710371 Bug ID: 1710371 Summary: Minor improvements across codebase for performance gain Product: GlusterFS Version: mainline Status: NEW Component: core Keywords: Performance, Tracking Severity: high Priority: high Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: There are many places in glusterfs code, where we can do minor improvements to handle CPU cycles better. * No need for memset(). * Initializing large array! * revisit locked regions, and see if it can be optimized. * Reduce call to library functions, instead cache some results. * other such minor improvements, which would help overall performance. Version-Release number of selected component (if applicable): master Expected results: With focus on such improvement, we expect better CPU utilization, which means better performance in the long run. Additional info: This bug can be a tracker bug, and hence this shouldn't be closed as long as we feel we have good performance. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 15 12:47:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 12:47:46 +0000 Subject: [Bugs] [Bug 1670031] performance regression seen with smallfile workload tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670031 --- Comment #18 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22242 (inode: reduce inode-path execution time) posted (#3) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 15 12:47:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 12:47:47 +0000 Subject: [Bugs] [Bug 1670031] performance regression seen with smallfile workload tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670031 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22242 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 15 12:47:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 12:47:49 +0000 Subject: [Bugs] [Bug 1710371] Minor improvements across codebase for performance gain In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710371 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22242 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
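The bug 1710371 description above lists the kinds of micro-optimizations being tracked (dropping unnecessary memset() calls, reducing repeated library calls, and so on). The small, self-contained example below is not taken from the GlusterFS tree; it only illustrates two of those patterns in isolation: a struct fully initialized without a separate memset(), and a strlen() hoisted out of a loop.

```
#include <stdio.h>
#include <string.h>

struct peer {
    char name[64];
    int port;
};

/* Pattern 1: designated initializers zero the whole struct, so a separate
 * memset() of the same memory would be redundant work. */
static struct peer make_peer(const char *name, int port)
{
    struct peer p = { .port = port };   /* no memset(&p, 0, sizeof(p)) needed */
    snprintf(p.name, sizeof(p.name), "%s", name);
    return p;
}

/* Pattern 2: cache the result of a library call instead of re-evaluating it
 * on every loop iteration. */
static size_t count_char(const char *s, char c)
{
    size_t n = 0;
    size_t len = strlen(s);             /* hoisted out of the loop condition */
    for (size_t i = 0; i < len; i++)
        if (s[i] == c)
            n++;
    return n;
}

int main(void)
{
    struct peer p = make_peer("server1", 24007);
    printf("%s:%d, 'e' count in name: %zu\n", p.name, p.port,
           count_char(p.name, 'e'));
    return 0;
}
```

Each such change is tiny on its own, which is presumably why the report is kept open as a tracker rather than being tied to any single patch.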
From bugzilla at redhat.com Wed May 15 12:47:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 12:47:50 +0000 Subject: [Bugs] [Bug 1710371] Minor improvements across codebase for performance gain In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710371 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22242 (inode: reduce inode-path execution time) posted (#3) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 15 22:58:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 22:58:58 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 turnerb changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ben at bttech.co.uk --- Comment #20 from turnerb --- I was curious if anyone ever got this resolved? I was running 4.1.7 and set up a geo-replica; it had the above issue with the renaming of files and directories. I have tried upgrading to 4.1.8 and have now moved to 5.6, and the best I have now is replicated renames of directories. Renaming of files still doesn't get replicated to the geo-replica volume. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 15 23:09:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 15 May 2019 23:09:33 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 --- Comment #21 from asender at testlabs.com.au --- Some custom compiled RPMS are versioned 4.1.8-0.1.git... and contain the fixes. What a mess this project has become. Broken all versions. Officially it will be in 4.1.9, but you can use the below RPMs - we are running them again in production now and they appear to be OK. Make sure you update clients as well. [1] RPMs for 4.1 including the fix for el7: https://build.gluster.org/job/rpm-el7/3599/artifact/ -Adrian Sender -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 16 08:27:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 16 May 2019 08:27:43 +0000 Subject: [Bugs] [Bug 1660225] geo-rep does not replicate mv or rename of file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660225 --- Comment #22 from turnerb --- Thank you Adrian, appreciate the feedback. Unfortunately that URL returns a 404 error, so I cannot get that installed. I may just wait for the 4.1.9 release to go GA; based on the release cycle it might well be out next week. Unless you happen to have a copy of the RPMs that you can share? Thanks, Ben. -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Thu May 16 08:32:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 16 May 2019 08:32:57 +0000 Subject: [Bugs] [Bug 1710744] New: [FUSE] Endpoint is not connected after "Found anomalies" error Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710744 Bug ID: 1710744 Summary: [FUSE] Endpoint is not connected after "Found anomalies" error Product: GlusterFS Version: 5 OS: Linux Status: NEW Component: fuse Severity: urgent Assignee: bugs at gluster.org Reporter: kompastver at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Fuse client crashed after it found anomalies. Or probably it's just a coincidence. Version-Release number of selected component (if applicable): v5.5 How reproducible: I've observed it only once after two months of uptime on a production cluster Steps to Reproduce: Actually, there were usual workload, and nothing special was done 1. setup replicated volume with two bricks 2. mount it via fuse client 3. use it till it crashes ?\_(?)_/? Actual results: Fuse mount crashed Expected results: Fuse mount works Additional info: Volume info: Volume Name: st Type: Replicate Volume ID: adfd1585-1f5c-42af-a195-af57889d951d Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: srv1:/vol3/ Brick2: srv2:/vol3/ Options Reconfigured: performance.read-ahead: off cluster.readdir-optimize: on client.event-threads: 2 server.event-threads: 16 network.ping-timeout: 20 cluster.data-self-heal-algorithm: full performance.io-thread-count: 16 performance.io-cache: on performance.cache-size: 1GB performance.quick-read: off transport.address-family: inet6 performance.readdir-ahead: off nfs.disable: on Fuse client log: [2019-05-15 01:46:03.521974] W [fuse-bridge.c:582:fuse_entry_cbk] 0-glusterfs-fuse: 179361305: MKDIR() /assets_cache/26B/280 => -1 (File exists) [2019-05-16 05:29:44.172704] W [fuse-bridge.c:582:fuse_entry_cbk] 0-glusterfs-fuse: 186813586: MKDIR() /assets_cache/320/0C0 => -1 (File exists) [2019-05-16 05:29:44.175393] I [MSGID: 109063] [dht-layout.c:659:dht_layout_normalize] 6-st3-dht: Found anomalies in /assets_cache/320/0C0 (gfid = 00000000-0000-0000-0000-000000000000). 
Holes=1 overlaps=0 pending frames: pending frames: frame : type(1) op(UNLINK) frame : type(1) op(OPEN) frame : type(0) op(0) patchset: git://git.gluster.org/glusterfs.git signal received: 11 frame : type(1) op(UNLINK) frame : type(1) op(OPEN) frame : type(0) op(0) patchset: git://git.gluster.org/glusterfs.git signal received: 11 time of crash: 2019-05-16 05:58:43 configuration details: argp 1 backtrace 1 dlfcn 1 libpthread 1 llistxattr 1 setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 5.5 time of crash: 2019-05-16 05:58:43 configuration details: argp 1 backtrace 1 dlfcn 1 libpthread 1 llistxattr 1 setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 5.5 /lib64/libglusterfs.so.0(+0x26620)[0x7f4b559a4620] /lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7f4b559aebd4] /lib64/libc.so.6(+0x36280)[0x7f4b53dd7280] /lib64/libpthread.so.0(pthread_mutex_lock+0x0)[0x7f4b545d9c30] /lib64/libglusterfs.so.0(fd_unref+0x37)[0x7f4b559cd1e7] /usr/lib64/glusterfs/5.5/xlator/protocol/client.so(+0x17038)[0x7f4b47beb038] /usr/lib64/glusterfs/5.5/xlator/protocol/client.so(+0x721ed)[0x7f4b47c461ed] /lib64/libgfrpc.so.0(+0xf030)[0x7f4b55771030] /lib64/libgfrpc.so.0(+0xf403)[0x7f4b55771403] /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f4b5576d2f3] /usr/lib64/glusterfs/5.5/rpc-transport/socket.so(+0xa106)[0x7f4b48d04106] /lib64/libglusterfs.so.0(+0x8aa89)[0x7f4b55a08a89] /lib64/libpthread.so.0(+0x7dd5)[0x7f4b545d7dd5] /lib64/libc.so.6(clone+0x6d)[0x7f4b53e9eead] --------- -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 16 09:14:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 16 May 2019 09:14:11 +0000 Subject: [Bugs] [Bug 1710744] [FUSE] Endpoint is not connected after "Found anomalies" error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710744 --- Comment #1 from Pavel Znamensky --- I have the coredump file, but due to sensitive information, I can send it directly to the developers only. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 16 10:09:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 16 May 2019 10:09:08 +0000 Subject: [Bugs] [Bug 1709653] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709653 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-16 10:09:08 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22720 (geo-rep: Convert gfid conflict resolutiong logs into debug) merged (#1) on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu May 16 11:59:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 16 May 2019 11:59:31 +0000 Subject: [Bugs] [Bug 1696136] gluster fuse mount crashed, when deleting 2T image file from oVirt Manager UI In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696136 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22507 (features/shard: Fix crash during background shard deletion in a specific case) merged (#6) on master by Krutika Dhananjay -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Thu May 16 13:17:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 16 May 2019 13:17:26 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22736 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 16 13:17:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 16 May 2019 13:17:28 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #661 from Worker Ant --- REVIEW: https://review.gluster.org/22736 (tests: change usleep() to sleep()) posted (#1) for review on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 16 19:21:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 16 May 2019 19:21:02 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #7 from Jeff Bischoff --- Update: Looking at the logs chronologically, I first see a failure in the brick and then a few seconds later the volume shuts down (we have only one brick per volume): >From the Brick log ------------------ [2019-05-08 13:48:33.642605] W [MSGID: 113075] [posix-helpers.c:1895:posix_fs_health_check] 0-heketidbstorage-posix: aio_write() on /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_a16f9f0374fe5db948a60a017a3f5e60/brick/.glusterfs/health_check returned [Resource temporarily unavailable] [2019-05-08 13:48:33.749246] M [MSGID: 113075] [posix-helpers.c:1962:posix_health_check_thread_proc] 0-heketidbstorage-posix: health-check failed, going down [2019-05-08 13:48:34.000428] M [MSGID: 113075] [posix-helpers.c:1981:posix_health_check_thread_proc] 0-heketidbstorage-posix: still alive! 
-> SIGTERM [2019-05-08 13:49:04.597061] W [glusterfsd.c:1514:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5) [0x7f16fdd94dd5] -->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xe5) [0x556e53da2d65] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x556e53da2b8b] ) 0-: received signum (15), shutting down >From the GlusterD log --------------------- [2019-05-08 13:49:04.673536] I [MSGID: 106143] [glusterd-pmap.c:397:pmap_registry_remove] 0-pmap: removing brick /var/lib/heketi/mounts/vg_c197878af606e71a874ad28e3bd7e4e1/brick_a16f9f0374fe5db948a60a017a3f5e60/brick on port 49152 [2019-05-08 13:49:05.003848] W [socket.c:599:__socket_rwv] 0-management: readv on /var/run/gluster/fe4ac75011a4de0e.socket failed (No data available) This same pattern repeats for all the bricks/volumes. Most of them go offline within a second of the first one. The stragglers go offline within the next 30 minutes. My interpretation of these logs is that the socket gluster is using times out. Do I need to increase 'network.ping-timeout' or 'client.grace-timeout'to address this? What really boggles me is why the brick stays offline after the timeout. After all, it is only a "temporarily" unavailable resource. Shouldn't Gluster be able to recover from this error without user intervention? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 16 20:30:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 16 May 2019 20:30:17 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #8 from Jeff Bischoff --- In my last comment, I asked: "...why the brick stays offline after the timeout. After all, it is only a "temporarily" unavailable resource. Shouldn't Gluster be able to recover from this error without user intervention?" To answer my own question: this appears to be a feature, not a bug according to https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/brick-failure-detection/. "When a brick process detects that the underlaying storage is not responding anymore, the process will exit. There is no automated way that the brick process gets restarted, the sysadmin will need to fix the problem with the storage first." It's good to at least understand why it isn't coming back up. However, it seems strange to me that Gluster would choose to stop and stay off like this in the face of an apparently transient issue. What is the best approach to remedy this? Should I increase the timeouts... or even disable the health checker? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
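For context on comments #7 and #8 above: the brick log shows aio_write() on the brick's .glusterfs/health_check file returning EAGAIN, after which the posix health-check thread takes the brick down, and the brick process is not restarted automatically. Below is a minimal, self-contained illustration of that mechanism, not GlusterFS source; the file path and the 30-second interval are placeholder choices (the real interval is governed by the storage.health-check-interval volume option, which, to my understanding, can be tuned or set to 0 to disable the check). Build with -lrt on older glibc.

```
/* Simplified illustration (not GlusterFS code) of a brick-style health
 * check: periodically issue an asynchronous write to a health_check file
 * and treat any failure, such as EAGAIN ("Resource temporarily
 * unavailable"), as the storage being unhealthy, after which the process
 * exits and stays down, just like the brick in the log above. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static int health_check_once(const char *path)
{
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    char ts[64];
    snprintf(ts, sizeof(ts), "health-check at %ld\n", (long)time(NULL));

    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = ts;
    cb.aio_nbytes = strlen(ts);

    int ret = aio_write(&cb);           /* fails immediately on e.g. EAGAIN */
    if (ret == 0) {
        const struct aiocb *list[1] = { &cb };
        aio_suspend(list, 1, NULL);     /* wait for completion */
        ret = (aio_error(&cb) == 0 && aio_return(&cb) >= 0) ? 0 : -1;
    }
    close(fd);
    return ret;
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "./health_check";
    for (;;) {
        if (health_check_once(path) != 0) {
            fprintf(stderr, "health-check failed (%s), going down\n",
                    strerror(errno));
            exit(1);   /* the real brick likewise exits and is not restarted */
        }
        sleep(30);     /* placeholder; cf. storage.health-check-interval */
    }
}
```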
From bugzilla at redhat.com Fri May 17 05:13:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 05:13:46 +0000 Subject: [Bugs] [Bug 1702289] Promotion failed for a0afd3e3-0109-49b7-9b74-ba77bf653aba.11229 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1702289 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low CC| |atumball at redhat.com --- Comment #1 from Amar Tumballi --- Hi Petr, We would like to let you know that we are not actively working on the tiering feature, and for your information, the feature is already deprecated in our latest 6.x versions. We recommend using dm-cache or similar tiering options on the brick itself to gain better performance with glusterfs on top of it. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 06:48:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 06:48:05 +0000 Subject: [Bugs] [Bug 1687811] core dump generated while running the test ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687811 Vivek Das changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1711159 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 06:53:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 06:53:49 +0000 Subject: [Bugs] [Bug 1707731] [Upgrade] Config files are not upgraded to new version In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707731 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|avishwan at redhat.com |sacharya at redhat.com --- Comment #1 from Aravinda VK ---
## Reading Old format:
The config file consists of a section named "__section_order__"; read it to get the order of the different sections. Sort the sections based on that order (the values), and prepare a dict whose values are updated from each section in turn. For example:
```
[__section_order__]
sec1=0
sec2=2
sec3=1
[sec1]
log_level = INFO
[sec2]
log_level = DEBUG
[sec3]
log_level = ERROR
```
configs = {}
for sec in sorted_sections():
    for item_key, item_value in sec.items():
        configs[item_key] = item_value

With this logic, `log_level` will have the final value "DEBUG".
## Upgrade:
During Geo-rep start (in gsyncd.py):
- Read the session config and check whether it is in the new format or the old one
- If it is the old format, read the config as explained above
- Compare the collected configs and write them to the new config only if they differ from the default configs
- Reload the new config
To get an old-format config:
- Install an old version of Glusterfs (<4) and create a geo-rep session
- Set some configurations in Geo-rep
- Copy the config file for reference
- The old parsing code can be referred to here: https://github.com/gluster/glusterfs/blob/release-3.13/geo-replication/syncdaemon/configinterface.py
-- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Fri May 17 07:48:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 07:48:35 +0000 Subject: [Bugs] [Bug 1709685] Geo-rep: Value of pending entry operations in detail status output is going up after each synchronization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709685 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-17 07:48:35 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22722 (geo-rep: Fix entries and metadata counters in geo-rep status) merged (#3) on release-6 by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 17 07:49:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 07:49:20 +0000 Subject: [Bugs] [Bug 1709737] geo-rep: Always uses rsync even with use_tarssh set to true In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709737 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-17 07:49:20 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22724 (geo-rep: Fix sync-method config) merged (#2) on release-6 by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 17 08:21:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 08:21:46 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #662 from Worker Ant --- REVIEW: https://review.gluster.org/22736 (tests: change usleep() to sleep()) merged (#1) on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 07:48:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 07:48:58 +0000 Subject: [Bugs] [Bug 1709734] Geo-rep: Data inconsistency while syncing heavy renames with constant destination name In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709734 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-17 07:48:58 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22723 (geo-rep: Fix rename with existing destination with same gfid) merged (#2) on release-6 by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 17 09:02:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 09:02:40 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22737 -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 17 09:02:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 09:02:41 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1622 from Worker Ant --- REVIEW: https://review.gluster.org/22737 (glusterd: coverity fix) posted (#1) for review on master by Sanju Rakonde -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 10:23:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 10:23:52 +0000 Subject: [Bugs] [Bug 1198746] Volume passwords are visible to remote users In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1198746 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WORKSFORME Flags|needinfo?(atumball at redhat.c | |om) | Last Closed| |2019-05-17 10:23:52 --- Comment #2 from Amar Tumballi --- Checked the behavior: On a server node: ``` [root at server-node ~]# gluster system getspec demo1 | grep password option password 423b5c4c-3457-4b14-ab67-0d4668e35c8f ``` On a separate node, which is not part of the trusted network: ``` [root at fedora28 ~]# gluster --remote-host=192.168.121.1 system getspec demo1 | grep password ``` So, the behavior is proper, and I don't see any security threat due to this command with latest codebase. Marking as WORKSFORME with glusterfs-6.x. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 10:38:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 10:38:26 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Status|POST |ASSIGNED Assignee|bugs at gluster.org |atumball at redhat.com Flags|needinfo?(atumball at redhat.c | |om) | --- Comment #64 from Amar Tumballi --- Will keep it in my name as I am yet to setup a team on this (and ARM). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 10:54:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 10:54:53 +0000 Subject: [Bugs] [Bug 1711240] [GNFS] gf_nfs_mt_inode_ctx serious memory leak In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711240 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Version|unspecified |mainline Component|glusterfs |nfs CC| |atumball at redhat.com, | |bugs at gluster.org Assignee|jthottan at redhat.com |bugs at gluster.org QA Contact|bmekala at redhat.com | Product|Red Hat Gluster Storage |GlusterFS -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 17 10:57:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 10:57:53 +0000 Subject: [Bugs] [Bug 1711240] [GNFS] gf_nfs_mt_inode_ctx serious memory leak In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711240 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22738 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 10:57:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 10:57:54 +0000 Subject: [Bugs] [Bug 1711240] [GNFS] gf_nfs_mt_inode_ctx serious memory leak In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711240 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22738 (inode: fix wrong loop count in __inode_ctx_free) posted (#1) for review on master by Xie Changlong -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 11:01:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 11:01:32 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 --- Comment #65 from abhays --- (In reply to Amar Tumballi from comment #64) > Will keep it in my name as I am yet to setup a team on this (and ARM). Thanks for the reply @Amar. > And one query we have with respect to these failures whether they affect the > main functionality of Glusterfs or they can be ignored for now? > Please let us know. > > > Also, s390x systems have been added on the gluster-ci. Any updates regards > to that? @Amar,Could you please comment on this also? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 17 11:03:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 11:03:41 +0000 Subject: [Bugs] [Bug 1711240] [GNFS] gf_nfs_mt_inode_ctx serious memory leak In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711240 --- Comment #5 from Xie Changlong --- @Amar Tumballi test gnfs with master branch 836e5b6b, nfs_forget never call. It seems glusterfs-3.12.2-47.el7 also has this problem. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 17 11:19:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 11:19:00 +0000 Subject: [Bugs] [Bug 1711250] New: bulkvoldict thread is not handling all volumes while brick multiplex is enabled Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711250 Bug ID: 1711250 Summary: bulkvoldict thread is not handling all volumes while brick multiplex is enabled Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bmekala at redhat.com, bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1711249 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1711249 +++ Description of problem: In commit ac70f66c5805e10b3a1072bd467918730c0aeeb4 one condition was missed to handle volumes by bulkvoldict thread so at the time of getting friend request from peer, glusterd is not sending all volumes updates to peers. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Setup 500 volumes(test1..test500 1x3) 2. Enable brick_multiplex 3. Stop glusterd on one node 4. Update "performance.readdir-ahead on" for volumes periodically like test1,test20,test40,test60,test80...test500 5. Start glusterd on 6. Wait 2 minutes to finish handshake and then check the value of performance.readdir-ahead for volumes (test1,test20,test40,....test500) The value should be sync by peer nodes Actual results: For some of the volumes value is not synced. Expected results: For all the volumes value should be synced Additional info: --- Additional comment from RHEL Product and Program Management on 2019-05-17 11:17:45 UTC --- This bug is automatically being proposed for the next minor release of Red Hat Gluster Storage by setting the release flag 'rhgs?3.5.0' to '?'. If this bug should be proposed for a different release, please manually change the proposed release flag. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1711249 [Bug 1711249] bulkvoldict thread is not handling all volumes while brick multiplex is enabled -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 11:19:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 11:19:15 +0000 Subject: [Bugs] [Bug 1711250] bulkvoldict thread is not handling all volumes while brick multiplex is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711250 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
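As a generic illustration of the class of problem bug 1711250 describes above (a missed condition causing the bulkvoldict worker threads to skip some volumes), and explicitly not the actual glusterd code, the sketch below partitions a list of volumes into per-thread index ranges. The thread count and the clamping of the last, partial chunk are exactly the sort of conditions that are easy to get wrong; computing the thread count with plain integer division (500 / 150 = 3) would silently leave the last 50 volumes unhandled. Compile with -pthread.

```
/* Generic partitioning sketch, not glusterd source: hand out NUM_VOLUMES
 * "volumes" to worker threads by index range and verify none were skipped. */
#include <pthread.h>
#include <stdio.h>

#define NUM_VOLUMES 500
#define VOLS_PER_THREAD 150   /* deliberately not a divisor of 500 */

struct chunk {
    int start;  /* inclusive */
    int end;    /* exclusive */
};

static int handled[NUM_VOLUMES];

static void *worker(void *arg)
{
    struct chunk *c = arg;
    for (int i = c->start; i < c->end; i++)
        handled[i] = 1;   /* stand-in for "add volume i to the dictionary" */
    return NULL;
}

int main(void)
{
    /* Ceiling division; plain NUM_VOLUMES / VOLS_PER_THREAD would drop the
     * remainder chunk and the tail volumes would never be processed. */
    int nthreads = (NUM_VOLUMES + VOLS_PER_THREAD - 1) / VOLS_PER_THREAD;
    pthread_t tids[nthreads];
    struct chunk chunks[nthreads];

    for (int t = 0; t < nthreads; t++) {
        chunks[t].start = t * VOLS_PER_THREAD;
        chunks[t].end = (t + 1) * VOLS_PER_THREAD;
        if (chunks[t].end > NUM_VOLUMES)
            chunks[t].end = NUM_VOLUMES;   /* clamp the final, partial chunk */
        pthread_create(&tids[t], NULL, worker, &chunks[t]);
    }
    for (int t = 0; t < nthreads; t++)
        pthread_join(tids[t], NULL);

    int missed = 0;
    for (int i = 0; i < NUM_VOLUMES; i++)
        if (!handled[i])
            missed++;
    printf("volumes not handled: %d\n", missed);   /* expect 0 */
    return 0;
}
```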
From bugzilla at redhat.com Fri May 17 12:47:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 12:47:36 +0000 Subject: [Bugs] [Bug 1711250] bulkvoldict thread is not handling all volumes while brick multiplex is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711250 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22739 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 17 12:47:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 12:47:37 +0000 Subject: [Bugs] [Bug 1711250] bulkvoldict thread is not handling all volumes while brick multiplex is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711250 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22739 (glusterd: bulkvoldict thread is not handling all volumes) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 17 13:04:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 13:04:15 +0000 Subject: [Bugs] [Bug 1711297] New: Optimize glusterd code to copy dictionary in handshake code path Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711297 Bug ID: 1711297 Summary: Optimize glusterd code to copy dictionary in handshake code path Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bmekala at redhat.com, bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1711296 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1711296 [Bug 1711296] Optimize glusterd code to copy dictionary in handshake code path -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 13:04:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 13:04:36 +0000 Subject: [Bugs] [Bug 1711297] Optimize glusterd code to copy dictionary in handshake code path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711297 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 13:04:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 13:04:51 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22741 -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 17 13:04:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 13:04:52 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1623 from Worker Ant --- REVIEW: https://review.gluster.org/22741 (across: coverity fixes) posted (#1) for review on master by Sheetal Pamecha -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 14:01:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 14:01:02 +0000 Subject: [Bugs] [Bug 1711297] Optimize glusterd code to copy dictionary in handshake code path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711297 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22742 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 17 14:01:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 14:01:03 +0000 Subject: [Bugs] [Bug 1711297] Optimize glusterd code to copy dictionary in handshake code path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711297 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22742 (glusterd: Optimize code to copy dictionary in handshake code path) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 17 17:05:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 17:05:38 +0000 Subject: [Bugs] [Bug 1711400] New: Dispersed volumes leave open file descriptors on nodes Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711400 Bug ID: 1711400 Summary: Dispersed volumes leave open file descriptors on nodes Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: disperse Severity: medium Assignee: bugs at gluster.org Reporter: maclemming+redhat at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community We have a 5 node Gluster cluster running 4.1.7. The gluster service is running as a container on each host, and is mounted the `/srv/brick` directory on the host. We have several volumes that we have set up as dispersed volumes. The client for Gluster is a Kubernetes cluster. One of the apps we have running is a Devpi server. After a couple of days running the devpi server, we noticed that the Gluster servers were unresponsive. Trying to ssh to any of the nodes gave an error about too many files open. We eventually had to reboot all of the servers to recover them. The next day, we checked again, and saw that the glusterfs process that was responsible for the devpi volume had 3 million files open (as seen with the command `sudo lsof -a -p | wc -l`). Stopping the container did not free up the file descriptors. Only stopping and starting the volume would release the FDs. However, as soon as devpi starts again and serves files, then the open FDs start rising again. I was able to narrow down to when writing to files. 
Here are the replication steps: Create a Gluster dispersed volume: gluster volume create fd-test disperse 5 redundancy 2 srv1:/path srv2:/path srv3:/path srv4:/path srv5:/path gluster volume quota fd-test enable gluster volume quota fd-test limit-usage / 1GB Mount the volume on a host, and run the simple script in the Gluster volume: #!/bin/bash while [ 1 -eq 1 ] do echo "something\n" > file.txt sleep 1 done >From any one of the Gluster nodes, find the PID of the Gluster process for the volume, and run the commands to see the number of FDs (every 5 seconds): admin at gfs-01:~$ sudo lsof -a -p 11606 | wc -l 26 admin at gfs-01:~$ sudo lsof -a -p 11606 | wc -l 30 admin at gfs-01:~$ sudo lsof -a -p 11606 | wc -l 35 admin at gfs-01:~$ sudo lsof -a -p 11606 | wc -l 40 If you take out the sleep, the FDs will jump up by thousands every second. If you view the actual FDs without the `wc` command, they are almost all the same file: glusterfs 11606 root 1935w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f glusterfs 11606 root 1936w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f glusterfs 11606 root 1937w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f glusterfs 11606 root 1938w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f glusterfs 11606 root 1939w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f glusterfs 11606 root 1940w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f glusterfs 11606 root 1941w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f glusterfs 11606 root 1942w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f glusterfs 11606 root 1943w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f glusterfs 11606 root 1944w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f glusterfs 11606 root 1945w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f glusterfs 11606 root 1946w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f glusterfs 11606 root 1947w REG 8,17 18944 53215266 /srv/brick/fd-test/.glusterfs/2e/4c/2e4c7104-02c4-4ac9-b611-7290938a6e3f The container itself does not see any open FDs. It is only the Gluster host. We tried creating a replicated volume and moved the devpi data to the new volume, and it worked fine without leaving open FDs (constant 90 FDs open), so the problem appears to be just with dispersed mode. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 17 18:08:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 18:08:43 +0000 Subject: [Bugs] [Bug 1708926] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708926 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22743 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
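A lighter-weight alternative to the repeated lsof runs in the bug 1711400 reproduction steps above: the small Linux-specific helper below counts the entries in /proc/<pid>/fd directly. It is only a convenience for watching the descriptor count grow (lsof over millions of descriptors gets slow); the 5-second cadence mirrors the checks above and is otherwise arbitrary.

```
/* Count open file descriptors of a process by listing /proc/<pid>/fd.
 * Linux-specific; intended only as a convenience while reproducing the
 * descriptor growth described above. */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

static long count_fds(pid_t pid)
{
    char path[64];
    snprintf(path, sizeof(path), "/proc/%d/fd", (int)pid);

    DIR *d = opendir(path);
    if (!d)
        return -1;

    long n = 0;
    struct dirent *e;
    while ((e = readdir(d)) != NULL)
        if (e->d_name[0] != '.')   /* skip "." and ".." */
            n++;
    closedir(d);
    return n;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);
    for (;;) {
        printf("open fds: %ld\n", count_fds(pid));
        fflush(stdout);
        sleep(5);   /* same 5-second cadence as the lsof checks above */
    }
}
```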
From bugzilla at redhat.com Fri May 17 18:08:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 17 May 2019 18:08:44 +0000 Subject: [Bugs] [Bug 1708926] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708926 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22743 (afr/frame: Destroy frame after afr_selfheal_entry_granular) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:28:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:28:24 +0000 Subject: [Bugs] [Bug 1614275] Fix spurious failures in tests/bugs/ec/bug-1236065.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1614275 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |NEW CC| |atumball at redhat.com --- Comment #2 from Amar Tumballi --- Patch abandoned. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:29:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:29:36 +0000 Subject: [Bugs] [Bug 1610240] Mark tests/bugs/distribute/bug-1122443.t failing very often In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1610240 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |NOTABUG Last Closed| |2019-05-18 06:29:36 --- Comment #3 from Amar Tumballi --- This is fixed as another patch from Mohit. Not seen in a long time. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:30:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:30:56 +0000 Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635688 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE Last Closed|2019-03-25 16:31:10 |2019-05-18 06:30:56 --- Comment #26 from Amar Tumballi --- The major work with this bug is complete, and is available with glusterfs-6.x release. We can use other bugs to track the minor cleanups (if any). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:31:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:31:20 +0000 Subject: [Bugs] [Bug 1642802] remove 'bd' xlators from core glusterfs builds In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642802 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-18 06:31:20 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat May 18 06:31:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:31:20 +0000 Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635688 Bug 1635688 depends on bug 1642802, which changed state. Bug 1642802 Summary: remove 'bd' xlators from core glusterfs builds https://bugzilla.redhat.com/show_bug.cgi?id=1642802 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:32:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:32:21 +0000 Subject: [Bugs] [Bug 1651322] Incorrect usage of local->fd in afr_open_ftruncate_cbk In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651322 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-5.5 Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-18 06:32:21 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:33:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:33:03 +0000 Subject: [Bugs] [Bug 1655861] Avoid sending duplicate pmap signout In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1655861 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |NEW CC| |atumball at redhat.com --- Comment #2 from Amar Tumballi --- patch abandoned. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:34:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:34:04 +0000 Subject: [Bugs] [Bug 1656000] file access problem with encrypt xlator in case one brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1656000 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |NEXTRELEASE Last Closed| |2019-05-18 06:34:04 --- Comment #2 from Amar Tumballi --- The patch is accepted at https://github.com/gluster/glusterfs-xlators/pull/1 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:34:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:34:56 +0000 Subject: [Bugs] [Bug 1658733] tests/bugs/glusterd/rebalance-operations-in-single-node.t is failing in brick mux regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1658733 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com Flags| |needinfo?(srakonde at redhat.c | |om) --- Comment #3 from Amar Tumballi --- I don't see this test failing regression anymore. Did we fix this? 
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:35:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:35:23 +0000 Subject: [Bugs] [Bug 1701337] issues with 'building' glusterfs packages if we do 'git clone --depth 1' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701337 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-7.0 Resolution|--- |NEXTRELEASE Last Closed| |2019-05-18 06:35:23 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:36:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:36:04 +0000 Subject: [Bugs] [Bug 1683352] remove experimental xlators informations from glusterd-volume-set.c In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683352 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed|2019-02-27 03:24:47 |2019-05-18 06:36:04 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:36:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:36:05 +0000 Subject: [Bugs] [Bug 1683506] remove experimental xlators informations from glusterd-volume-set.c In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683506 Bug 1683506 depends on bug 1683352, which changed state. Bug 1683352 Summary: remove experimental xlators informations from glusterd-volume-set.c https://bugzilla.redhat.com/show_bug.cgi?id=1683352 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:36:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:36:50 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed|2019-03-29 02:21:43 |2019-05-18 06:36:50 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat May 18 06:37:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:37:44 +0000 Subject: [Bugs] [Bug 1644164] Use GF_ATOMIC ops to update inode->nlookup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644164 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |CURRENTRELEASE Last Closed|2019-03-25 16:31:39 |2019-05-18 06:37:44 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat May 18 06:38:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:38:22 +0000 Subject: [Bugs] [Bug 1709262] Use GF_ATOMIC ops to update inode->nlookup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709262 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Resolution|--- |WONTFIX Last Closed| |2019-05-18 06:38:22 --- Comment #3 from Amar Tumballi --- Considering the request was for the 3.12 branch, which is not maintained anymore, closing it as WONTFIX. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 18 06:38:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 18 May 2019 06:38:41 +0000 Subject: [Bugs] [Bug 1658733] tests/bugs/glusterd/rebalance-operations-in-single-node.t is failing in brick mux regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1658733 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(srakonde at redhat.c | |om) | --- Comment #4 from Sanju --- As far as I know, we didn't fix it yet. It might have been fixed as a side effect of some other change, which is why we are not seeing this failure in recent brick-mux regressions. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 20 04:54:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 04:54:54 +0000 Subject: [Bugs] [Bug 1711764] New: Files inaccessible if one rebalance process is killed in a multinode volume Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711764 Bug ID: 1711764 Summary: Files inaccessible if one rebalance process is killed in a multinode volume Product: GlusterFS Version: 4.1 Status: NEW Component: distribute Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: This is a consequence of https://review.gluster.org/#/c/glusterfs/+/17239/ and lookup-optimize being enabled. Rebalance directory processing steps on each node: 1. Set new layout on directory without the commit hash 2. List files on that local subvol. Migrate those files which fall into its bucket. Lookups are performed on the files only if the process determines that it is the one that will migrate them. 3. When done, update the layout on the local subvol with the layout containing the commit hash.
When there are multiple rebalance processes processing the same directory, they finish at different times and one process can update the layout with the commit hash before the others are done listing and migrating their files. Clients will therefore see a complete layout even before all files have been looked up according to the new layout causing file access to fail. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. Create a 2x2 volume spanning 2 nodes. Create some directories and files on it. 2. Add 2 bricks to convert it to a 3x2 volume. 3. Start a rebalance on the volume and break into one rebalance process before it starts processing the directories. 4. Allow the second rebalance process to complete. Kill the process that is blocked by gdb. 5. Mount the volume and try to stat the files without listing the directories. Actual results: The stat will fail for several files with the error : stat: cannot stat ??: No such file or directory Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 20 04:55:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 04:55:12 +0000 Subject: [Bugs] [Bug 1711764] Files inaccessible if one rebalance process is killed in a multinode volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711764 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Version|4.1 |mainline Assignee|bugs at gluster.org |nbalacha at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 20 05:00:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 05:00:32 +0000 Subject: [Bugs] [Bug 1711764] Files inaccessible if one rebalance process is killed in a multinode volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711764 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 20 05:05:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 05:05:30 +0000 Subject: [Bugs] [Bug 1711764] Files inaccessible if one rebalance process is killed in a multinode volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711764 --- Comment #1 from Nithya Balachandran --- The easiest solution is to have each node do the file lookups before the call to gf_defrag_should_i_migrate. Pros: Simple Cons: Will introduce more lookups but is pretty much the same as the number seen before https://review.gluster.org/#/c/glusterfs/+/17239/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 20 07:56:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 07:56:13 +0000 Subject: [Bugs] [Bug 1711820] New: Typo in cli return string. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711820 Bug ID: 1711820 Summary: Typo in cli return string. 
Product: GlusterFS Version: 4.1 Status: NEW Component: cli Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Create a replica 2 volume. Add bricks to it to expand the volume without changing the replica count. The cli displays the following: [root at rhgs313-6 src]# gluster volume add-brick vol1 replica 2 192.168.122.6:/bricks/brick1/vol1-3 192.168.122.7:/bricks/brick1/vol1-3 Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avaoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/. Do you still want to continue? (y/n) y "avoid" is spelled incorrectly. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 20 07:56:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 07:56:25 +0000 Subject: [Bugs] [Bug 1711820] Typo in cli return string. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711820 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Version|4.1 |mainline -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 20 08:15:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 08:15:25 +0000 Subject: [Bugs] [Bug 1711827] New: test case bug-1399598-uss-with-ssl.t is generating crash Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711827 Bug ID: 1711827 Summary: test case bug-1399598-uss-with-ssl.t is generating crash Product: GlusterFS Version: mainline Status: NEW Component: rpc Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: ./tests/bugs/snapshot/bug-1399598-uss-with-ssl.t is generating crash. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Run the .t in a loop: for i in `seq 1 10`; do prove -vf ./tests/bugs/snapshot/bug-1399598-uss-with-ssl.t; done 2. 3. Actual results: Test case is crashing Expected results: Test case should not crash Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 20 08:15:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 08:15:40 +0000 Subject: [Bugs] [Bug 1711827] test case bug-1399598-uss-with-ssl.t is generating crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711827 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Mon May 20 08:19:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 08:19:01 +0000 Subject: [Bugs] [Bug 1711827] test case bug-1399598-uss-with-ssl.t is generating crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711827 --- Comment #1 from Mohit Agrawal --- Hi, test case is generating below core dump, As we can see in thread 1 ssl ctx is NULL so ssl api are crashing. ssl ctx is NULL because rpc_clnt_disable is already called by the client and rpc_clnt_disable internally call ssl_teardown_connection to cleanup ssl connection. Thread 17 (Thread 0x7fad4e5f9700 (LWP 28214)): #0 0x00007fad73bdb17f in epoll_wait () from /lib64/libc.so.6 #1 0x00007fad7581451c in event_dispatch_epoll_worker (data=0x7fad44000ca0) at event-epoll.c:751 #2 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #3 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 16 (Thread 0x7fad591d7700 (LWP 28209)): #0 0x00007fad7433893a in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007fad757f40b6 in syncenv_task (proc=proc at entry=0x7fad5003a320) at syncop.c:517 #2 0x00007fad757f4e90 in syncenv_processor (thdata=0x7fad5003a320) at syncop.c:584 #3 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #4 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 15 (Thread 0x7fad62bc2700 (LWP 28101)): #0 0x00007fad73bd27a7 in select () from /lib64/libc.so.6 #1 0x00007fad7582ac7d in runner (arg=0xb67550) at ../../contrib/timer-wheel/timer-wheel.c:186 #2 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #3 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 14 (Thread 0x7fad61319700 (LWP 28112)): #0 0x00007fad743385ec in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007fad4db83b4b in fini (this=) at client.c:2797 #2 0x00007fad757b4acc in xlator_fini_rec (xl=0x7fad400059a0) at xlator.c:667 #3 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad40018dc0) at xlator.c:657 #4 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad4001aa00) at xlator.c:657 #5 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad4001c3d0) at xlator.c:657 #6 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad4001dee0) at xlator.c:657 #7 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad4001f950) at xlator.c:657 #8 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad400213e0) at xlator.c:657 ---Type to continue, or q to quit--- #9 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad40023590) at xlator.c:657 #10 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad40025040) at xlator.c:657 #11 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad40026b10) at xlator.c:657 #12 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad400285c0) at xlator.c:657 #13 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad4002a0a0) at xlator.c:657 #14 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad4002b3f0) at xlator.c:657 #15 0x00007fad757b4a5e in xlator_fini_rec (xl=0x7fad4002d340) at xlator.c:657 #16 0x00007fad757b5c5a in xlator_tree_fini (xl=) at xlator.c:759 #17 0x00007fad757eef72 in glusterfs_graph_deactivate (graph=) at graph.c:427 #18 0x00007fad606dc443 in pub_glfs_fini (fs=0x7fad50001890) at glfs.c:1353 #19 0x00007fad6090ec95 in mgmt_get_snapinfo_cbk (req=, iov=, count=, myframe=) at snapview-server-mgmt.c:410 #20 0x00007fad755620e1 in rpc_clnt_handle_reply (clnt=clnt at entry=0xb8fa20, pollin=pollin at entry=0x7fad5c04db40) at rpc-clnt.c:772 #21 0x00007fad75562493 in rpc_clnt_notify (trans=0xb8fbf0, 
mydata=0xb8fa50, event=, data=0x7fad5c04db40) at rpc-clnt.c:941 #22 0x00007fad7555f253 in rpc_transport_notify (this=this at entry=0xb8fbf0, event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7fad5c04db40) at rpc-transport.c:549 #23 0x00007fad621a6ef8 in socket_event_poll_in_async (xl=, async=0x7fad5c04dc68) at socket.c:2572 #24 0x00007fad621ae201 in gf_async (cbk=0x7fad621a6ed0 , xl=, async=0x7fad5c04dc68) at ../../../../libglusterfs/src/glusterfs/async.h:189 #25 socket_event_poll_in (notify_handled=true, this=0xb8fbf0) at socket.c:2613 #26 socket_event_handler (fd=fd at entry=10, idx=idx at entry=1, gen=gen at entry=1, data=data at entry=0xb8fbf0, poll_in=, poll_out=, poll_err=, event_thread_died=0 '\000') at socket.c:3004 #27 0x00007fad7581461b in event_dispatch_epoll_handler (event=0x7fad61318024, event_pool=0xb58aa0) at event-epoll.c:648 #28 event_dispatch_epoll_worker (data=0xbb4760) at event-epoll.c:761 #29 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #30 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 13 (Thread 0x7fad4f5fb700 (LWP 28212)): ---Type to continue, or q to quit--- #0 0x00007fad74333a2d in __pthread_timedjoin_ex () from /lib64/libpthread.so.0 #1 0x00007fad75813d07 in event_dispatch_epoll (event_pool=0x7fad50035260) at event-epoll.c:846 #2 0x00007fad606da404 in glfs_poller (data=) at glfs.c:727 #3 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #4 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 12 (Thread 0x7fad4ffff700 (LWP 28211)): #0 0x00007fad7433c460 in nanosleep () from /lib64/libpthread.so.0 #1 0x00007fad757c6136 in gf_timer_proc (data=0x7fad5003e610) at timer.c:194 #2 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #3 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 11 (Thread 0x7fad589d6700 (LWP 28210)): #0 0x00007fad7433893a in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007fad757f40b6 in syncenv_task (proc=proc at entry=0x7fad5003a6e0) at syncop.c:517 #2 0x00007fad757f4e90 in syncenv_processor (thdata=0x7fad5003a6e0) at syncop.c:584 #3 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #4 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 10 (Thread 0x7fad5a791700 (LWP 28132)): #0 0x00007fad7433893a in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007fad5b9c4b5d in iot_worker (data=0x7fad5c035bc0) at io-threads.c:197 #2 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #3 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 9 (Thread 0x7fad653c7700 (LWP 28096)): #0 0x00007fad7433c460 in nanosleep () from /lib64/libpthread.so.0 #1 0x00007fad757c6136 in gf_timer_proc (data=0xb60c70) at timer.c:194 #2 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #3 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 ---Type to continue, or q to quit--- Thread 8 (Thread 0x7fad75c9b5c0 (LWP 28095)): #0 0x00007fad74333a2d in __pthread_timedjoin_ex () from /lib64/libpthread.so.0 #1 0x00007fad75813d07 in event_dispatch_epoll (event_pool=0xb58aa0) at event-epoll.c:846 #2 0x00000000004060f8 in main (argc=, argv=) at glusterfsd.c:2917 Thread 7 (Thread 0x7fad5a618700 (LWP 28208)): #0 0x00007fad743385ec in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007fad7555c6e2 in rpcsvc_request_handler (arg=0x7fad6000c080) at rpcsvc.c:2189 #2 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #3 0x00007fad73bdae5f in clone () 
from /lib64/libc.so.6 Thread 6 (Thread 0x7fad61b1a700 (LWP 28111)): #0 0x00007fad73bdb17f in epoll_wait () from /lib64/libc.so.6 #1 0x00007fad7581451c in event_dispatch_epoll_worker (data=0xbb45e0) at event-epoll.c:751 #2 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #3 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 5 (Thread 0x7fad633c3700 (LWP 28100)): #0 0x00007fad7433893a in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007fad757f40b6 in syncenv_task (proc=proc at entry=0xb63c60) at syncop.c:517 #2 0x00007fad757f4e90 in syncenv_processor (thdata=0xb63c60) at syncop.c:584 #3 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #4 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 4 (Thread 0x7fad63bc4700 (LWP 28099)): #0 0x00007fad7433893a in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007fad757f40b6 in syncenv_task (proc=proc at entry=0xb638a0) at syncop.c:517 #2 0x00007fad757f4e90 in syncenv_processor (thdata=0xb638a0) at syncop.c:584 #3 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 ---Type to continue, or q to quit--- #4 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 3 (Thread 0x7fad643c5700 (LWP 28098)): #0 0x00007fad73ba76b0 in nanosleep () from /lib64/libc.so.6 #1 0x00007fad73ba758a in sleep () from /lib64/libc.so.6 #2 0x00007fad757e0c45 in pool_sweeper (arg=) at mem-pool.c:446 #3 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #4 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 2 (Thread 0x7fad64bc6700 (LWP 28097)): #0 0x00007fad73b18c8c in sigtimedwait () from /lib64/libc.so.6 #1 0x00007fad7433cc5c in sigwait () from /lib64/libpthread.so.0 #2 0x0000000000406667 in glusterfs_sigwaiter (arg=) at glusterfsd.c:2414 #3 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #4 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 Thread 1 (Thread 0x7fad4edfa700 (LWP 28213)): #0 0x00007fad73f3c654 in BIO_test_flags () from /lib64/libcrypto.so.1.1 #1 0x00007fad73f3d0d6 in BIO_copy_next_retry () from /lib64/libcrypto.so.1.1 #2 0x00007fad73f3a9fb in buffer_ctrl () from /lib64/libcrypto.so.1.1 #3 0x00007fad61f5c6b2 in ssl3_dispatch_alert () from /lib64/libssl.so.1.1 #4 0x00007fad61f700fc in ossl_statem_client_read_transition () from /lib64/libssl.so.1.1 #5 0x00007fad61f6ef59 in state_machine () from /lib64/libssl.so.1.1 #6 0x00007fad61f67135 in SSL_do_handshake () from /lib64/libssl.so.1.1 #7 0x00007fad621ac398 in ssl_complete_connection (this=this at entry=0x7fad40032c00) at socket.c:485 #8 0x00007fad621ac9ed in ssl_handle_client_connection_attempt (this=0x7fad40032c00) at socket.c:2812 #9 socket_complete_connection (this=0x7fad40032c00) at socket.c:2911 #10 socket_event_handler (fd=fd at entry=17, idx=idx at entry=1, gen=gen at entry=4, data=data at entry=0x7fad40032c00, poll_in=0, poll_out=4, poll_err=0, event_thread_died=0 '\000') at socket.c:2973 #11 0x00007fad7581461b in event_dispatch_epoll_handler (event=0x7fad4edf9024, event_pool=0x7fad50035260) at event-epoll.c:648 ---Type to continue, or q to quit--- #12 event_dispatch_epoll_worker (data=0x7fad44000b20) at event-epoll.c:761 #13 0x00007fad74332594 in start_thread () from /lib64/libpthread.so.0 #14 0x00007fad73bdae5f in clone () from /lib64/libc.so.6 (gdb) p priv->ssl_ssl $3 = (SSL *) 0x0 Thanks, Mohit Agrawal -- You are receiving this mail because: You are on the CC list for the bug. 
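For readers following the analysis above: the per-thread backtraces were taken from a core file with gdb, and the same output can be reproduced non-interactively. A minimal sketch, with the binary and core file paths as placeholders (they are not taken from this report):

    gdb --batch -ex 'set pagination off' -ex 'thread apply all bt' <glusterfs-binary> <core-file>

Turning pagination off also avoids the pager prompts visible in the paste above.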
From bugzilla at redhat.com Mon May 20 09:01:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 09:01:40 +0000 Subject: [Bugs] [Bug 1711827] test case bug-1399598-uss-with-ssl.t is generating crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711827 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22745 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 20 09:01:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 09:01:41 +0000 Subject: [Bugs] [Bug 1711827] test case bug-1399598-uss-with-ssl.t is generating crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711827 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22745 (rpc: test case bug-1399598-uss-with-ssl.t is generating crash) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 20 10:01:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 10:01:19 +0000 Subject: [Bugs] [Bug 1711764] Files inaccessible if one rebalance process is killed in a multinode volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711764 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22746 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 20 10:01:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 10:01:20 +0000 Subject: [Bugs] [Bug 1711764] Files inaccessible if one rebalance process is killed in a multinode volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711764 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22746 (cluster/dht: Lookup all files when processing directory) posted (#1) for review on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 20 11:04:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 11:04:17 +0000 Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and files listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670382 --- Comment #10 from Marcin --- It seems that in the new version of glusterfs (6.1) the problem no longer occurs, but I haven't seen it on the bug list to this version. Can anyone confirm this, please? Cheers -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
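For anyone wanting to confirm Marcin's observation above, the option in question can be inspected and toggled per volume from the CLI; a quick sketch, with the volume name as a placeholder:

    gluster volume get <VOLNAME> performance.parallel-readdir
    gluster volume set <VOLNAME> performance.parallel-readdir on    # or "off" to rule the option out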
From bugzilla at redhat.com Mon May 20 13:07:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 13:07:35 +0000 Subject: [Bugs] [Bug 1711945] New: create account on download.gluster.org Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711945 Bug ID: 1711945 Summary: create account on download.gluster.org Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: spamecha at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description - To upload the packages public key - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDaSvn31f+1My0S9aWAvjIWPrVOiENmWrM62CEF/wzBvAnMRuRh0qaGrcrJ1ZKz9+yjetwX7/obuynjjUTd0f/245Jc+f06E66jUJHKGjtk8bwfa0JMUzGYrFVyNFXMPqewRvcHZoFnjZF3xOIbCqTy4H9CZfJZszc83+FLoITPir3HNMJo0ATrSe9XHBRJHne6el+zxfaGQMEe4M5p76oWJORsvYkGjqAEnQSRTbdF9e51VvLz3ME3pdWiPviWF4TIkXolAjD7A2Jm9KK06t9SiOIP9AuVS9llVyf8gOZrwP+IR5gbZeiL5+9G+xWQTi7Pw5anAfJY1Mbe2l31yAen root at server1 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 20 13:10:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 13:10:26 +0000 Subject: [Bugs] [Bug 1711945] create account on download.gluster.org In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711945 spamecha at redhat.com changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |dkhandel at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 20 13:25:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 13:25:51 +0000 Subject: [Bugs] [Bug 1711950] New: Account in download.gluster.org to upload upload the build packages Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711950 Bug ID: 1711950 Summary: Account in download.gluster.org to upload upload the build packages Product: GlusterFS Version: 4.1 Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: sacharya at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: Need an account in download.gluster.org to upload upload the build packages. rsa-public key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXjiqCChrq8B/cJaRx9W3YQdtEo60dGwxuILdtw4xvQQz2/NwPKNAeOZ/1McfLv8zzuJa2Jm8mBzVk3Cc1NO0lRy3hUUphSHGrGe7BjL2WysXk4pYNYrNIza1X6EXjEDphvfRw7FU3DKVMIisOPnOgWW0xGT8Wb5XVfIfQzpW3ZJJX/aR2Nsjas2Dwxbf9hMfPHRNz5OQmNtpbqmkrcr/PC+9t7B5JJ+kdTe8x920/+7EaCTuAIOsin8fPxK4XoynA6BBuZu7B0rZbOm4DfL59loE2304epXbhvJkaTrNnkZOoQJRn4ruLDGq4F5jzCrOZyOH86TmExOz2rJdZC/wP root at vm1 Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 20 13:27:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 13:27:24 +0000 Subject: [Bugs] [Bug 1711950] Account in download.gluster.org to upload the build packages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711950 Shwetha K Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |dkhandel at redhat.com Summary|Account in |Account in |download.gluster.org to |download.gluster.org to |upload upload the build |upload the build packages |packages | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 20 13:47:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 13:47:51 +0000 Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and files listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670382 --- Comment #11 from joao.bauto at neuro.fchampalimaud.org --- I have been running with parallel-readdir on for the past 3 weeks with no issues. No info in the change notes... -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 20 20:01:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 20:01:41 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #68 from Amgad --- The op-version doesn't change with upgrade. So if I upgrade from 3.12.15 to 5.5, it stays the same: [root at gfs2 ansible]# gluster volume get all cluster.op-version Option Value ------ ----- cluster.op-version 31202 [root at gfs2 ansible]# gluster --version glusterfs 5.5 ...... So when I roll back, it's still the lower op-version. I don't change the op-version after the upgrade till everything is fine (soak); then I change it to the higher value. BTW -- I tested the scenario with 6.1-1 and it's still the same! Regards, Amgad -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 20 20:03:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 20:03:51 +0000 Subject: [Bugs] [Bug 1701203] GlusterFS 6.2 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701203 Amgad changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amgad.saleh at nokia.com Depends On| |1687051 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
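As background to comment #68 above: the cluster op-version is not raised automatically by a package upgrade; it is bumped explicitly once rollback is no longer needed. A hedged sketch of the usual sequence; the value to set is whatever the installed binaries report as their maximum, not a number assumed here:

    gluster volume get all cluster.op-version        # current cluster operating version
    gluster volume get all cluster.max-op-version    # highest op-version the installed binaries support
    gluster volume set all cluster.op-version <value reported by max-op-version>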
From bugzilla at redhat.com Mon May 20 20:03:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 20 May 2019 20:03:51 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Amgad changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1701203 (glusterfs-6.2) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1701203 [Bug 1701203] GlusterFS 6.2 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 01:40:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 01:40:04 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22750 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 01:40:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 01:40:04 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #663 from Worker Ant --- REVIEW: https://review.gluster.org/22750 (Revert \"rpc: implement reconnect back-off strategy\") posted (#2) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 02:56:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 02:56:53 +0000 Subject: [Bugs] [Bug 1711827] test case bug-1399598-uss-with-ssl.t is generating crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711827 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 03:55:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 03:55:15 +0000 Subject: [Bugs] [Bug 1186286] Geo-Replication Faulty state In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1186286 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |WORKSFORME Last Closed| |2019-05-21 03:55:15 --- Comment #5 from Kotresh HR --- The issue is no longer seen in latest releases, hence closing the issue. Please re-open the issue if it happens again and upload geo-rep logs. -- You are receiving this mail because: You are on the CC list for the bug. 
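Regarding the request above to upload geo-rep logs: with default packaging they live under the glusterfs log directory, so collecting them is usually just an archive of those paths (paths assume a standard install):

    tar czf georep-master-logs.tgz /var/log/glusterfs/geo-replication/          # on the master nodes
    tar czf georep-slave-logs.tgz /var/log/glusterfs/geo-replication-slaves/    # on the slave nodes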
From bugzilla at redhat.com Tue May 21 04:43:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 04:43:17 +0000 Subject: [Bugs] [Bug 1657645] [Glusterfs-server-5.1] Gluster storage domain creation fails on MountError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657645 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(sabose at redhat.com | |) --- Comment #11 from Amar Tumballi --- Did the newer releases of glusterfs work? We have fixed some issues with glusterfs 5.x series, and made 5.6 release. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 04:43:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 04:43:40 +0000 Subject: [Bugs] [Bug 1657645] [Glusterfs-server-5.1] Gluster storage domain creation fails on MountError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657645 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ebenahar at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 04:55:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 04:55:38 +0000 Subject: [Bugs] [Bug 1138541] Geo-replication configuration should have help option. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1138541 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|khiremat at redhat.com |sacharya at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 04:57:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 04:57:10 +0000 Subject: [Bugs] [Bug 1712220] New: tests/geo-rep: arequal checksum comparison always succeeds Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712220 Bug ID: 1712220 Summary: tests/geo-rep: arequal checksum comparison always succeeds Product: GlusterFS Version: 6 Status: NEW Component: tests Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community This bug was initially created as a copy of Bug #1707742 I am copying this bug because: Description of problem: arequal checksum comparison always succeeds in all geo-rep test cases. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: Run any geo-rep test case with bash -x bash -x tests/00-geo-rep/georep-basic-dr-tarssh.t + _EXPECT_WITHIN 109 120 0 arequal_checksum /mnt/glusterfs/0 /mnt/glusterfs/1 + TESTLINE=109 .. .. 
+ dbg 'TEST 35 (line 109): 0 arequal_checksum /mnt/glusterfs/0 /mnt/glusterfs/1' + '[' x0 = x0 ']' + saved_cmd='0 arequal_checksum /mnt/glusterfs/0 /mnt/glusterfs/1' + e=0 + a= + shift ++ date +%s + local endtime=1557300038 + EW_RETRIES=0 ++ date +%s + '[' 1557299918 -lt 1557300038 ']' ++ tail -1 ++ arequal_checksum /mnt/glusterfs/0 /mnt/glusterfs/1 ++ master=/mnt/glusterfs/0 ++ slave=/mnt/glusterfs/1 ++ wc -l ++ diff /dev/fd/63 /dev/fd/62 +++ arequal-checksum -p /mnt/glusterfs/1 +++ arequal-checksum -p /mnt/glusterfs/0 ++ exit 0 + a=20 + '[' 0 -ne 0 ']' + [[ 20 =~ 0 ]] <<<< Even though it's not equal it went to break in first iteration + break Actual results: arequal exists on first call even if it's not matched. Expected results: arequal should exists only on successful match Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 04:57:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 04:57:28 +0000 Subject: [Bugs] [Bug 1712220] tests/geo-rep: arequal checksum comparison always succeeds In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712220 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 05:00:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 05:00:07 +0000 Subject: [Bugs] [Bug 1712220] tests/geo-rep: arequal checksum comparison always succeeds In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712220 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22751 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 05:00:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 05:00:08 +0000 Subject: [Bugs] [Bug 1712220] tests/geo-rep: arequal checksum comparison always succeeds In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712220 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22751 (tests/geo-rep: Fix arequal checksum comparison) posted (#1) for review on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. 
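The trace above also shows why the comparison always succeeds: the check uses bash's =~ operator, which performs an unanchored regex match, so the expected value "0" is found inside the actual value "20". A minimal, self-contained illustration in plain bash (no gluster setup needed):

    a=20; e=0
    [[ $a =~ $e ]]     && echo "matches"     # prints "matches": ERE "0" occurs inside "20"
    [[ $a =~ ^${e}$ ]] || echo "no match"    # anchoring the pattern gives a real equality test
    [[ $a == "$e" ]]   || echo "not equal"   # a plain string comparison also behaves as intended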
From bugzilla at redhat.com Tue May 21 05:12:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 05:12:44 +0000 Subject: [Bugs] [Bug 1712223] New: geo-rep: With heavy rename workload geo-rep log if flooded Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712223 Bug ID: 1712223 Summary: geo-rep: With heavy rename workload geo-rep log if flooded Product: GlusterFS Version: 6 Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community This bug was initially created as a copy of Bug #1709653 I am copying this bug because: Description of problem: With heavy rename workload as mentioned in bug 1694820, geo-rep log is flooded with gfid conflict resolution logs. All the entries to be fixed are logged at INFO level. Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. Setup geo-rep and run the reproducer given in bug 1694820 Actual results: Geo-rep log is flooded Expected results: Geo-rep log should not be flooded. Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 05:13:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 05:13:05 +0000 Subject: [Bugs] [Bug 1712223] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712223 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 05:17:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 05:17:30 +0000 Subject: [Bugs] [Bug 1712223] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712223 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22752 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 05:17:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 05:17:31 +0000 Subject: [Bugs] [Bug 1712223] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712223 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22752 (geo-rep: Convert gfid conflict resolutiong logs into debug) posted (#1) for review on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
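Until a build with the above change is available, one possible stopgap (not part of the posted fix, and at the cost of hiding other INFO messages) is to raise the session log level so the INFO-level flooding is suppressed. A sketch with placeholder names; running "config" with no option lists what the installed version actually accepts, in case the option name differs:

    gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> config
    gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> config log_level WARNING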
From bugzilla at redhat.com Tue May 21 05:45:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 05:45:24 +0000 Subject: [Bugs] [Bug 1711827] test case bug-1399598-uss-with-ssl.t is generating crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711827 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |POST -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 05:59:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 05:59:04 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22753 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 05:59:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 05:59:05 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #38 from Worker Ant --- REVIEW: https://review.gluster.org/22753 (tests/quick-read-with-upcall.t: increase the timeout) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 06:07:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 06:07:18 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22754 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 06:07:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 06:07:19 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #664 from Worker Ant --- REVIEW: https://review.gluster.org/22754 (tests: Fix spurious failures in ta-write-on-bad-brick.t) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 06:07:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 06:07:19 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #665 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22750 (Revert \"rpc: implement reconnect back-off strategy\") posted (#4) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue May 21 06:39:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 06:39:20 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22750 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 06:39:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 06:39:21 +0000 Subject: [Bugs] [Bug 1711827] test case bug-1399598-uss-with-ssl.t is generating crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711827 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22750 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 06:39:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 06:39:23 +0000 Subject: [Bugs] [Bug 1711827] test case bug-1399598-uss-with-ssl.t is generating crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711827 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22750 (Revert \"rpc: implement reconnect back-off strategy\") posted (#4) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 06:39:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 06:39:23 +0000 Subject: [Bugs] [Bug 1711827] test case bug-1399598-uss-with-ssl.t is generating crash In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711827 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22750 (Revert \"rpc: implement reconnect back-off strategy\") merged (#5) on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 10:07:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 10:07:51 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22755 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 10:07:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 10:07:52 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #666 from Worker Ant --- REVIEW: https://review.gluster.org/22755 (tests: Add history api tests) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue May 21 10:49:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 10:49:07 +0000 Subject: [Bugs] [Bug 1712322] New: Brick logs inundated with [2019-04-27 22:14:53.378047] I [dict.c:541:dict_get] (-->/usr/lib64/glusterfs/6.0/xlator/features/worm.so(+0x7241) [0x7fe857bb3241] -->/usr/lib64/glusterfs/6.0/xlator/features/locks.so(+0x1c219) [0x7fe857dda219] [Invalid argumen Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712322 Bug ID: 1712322 Summary: Brick logs inundated with [2019-04-27 22:14:53.378047] I [dict.c:541:dict_get] (-->/usr/lib64/glusterfs/6.0/xlator/features/worm.so(+ 0x7241) [0x7fe857bb3241] -->/usr/lib64/glusterfs/6.0/xlator/features/locks.so(+ 0x1c219) [0x7fe857dda219] [Invalid argumen Product: GlusterFS Version: mainline Status: NEW Component: locks Severity: high Priority: high Assignee: spalai at redhat.com Reporter: spalai at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, ksubrahm at redhat.com, moagrawa at redhat.com, nchilaka at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, sheggodu at redhat.com, storage-qa-internal at redhat.com Depends On: 1704181 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1704181 [Bug 1704181] Brick logs inundated with [2019-04-27 22:14:53.378047] I [dict.c:541:dict_get] (-->/usr/lib64/glusterfs/6.0/xlator/features/worm.so(+0x7241) [0x7fe857bb3241] -->/usr/lib64/glusterfs/6.0/xlator/features/locks.so(+0x1c219) [0x7fe857dda219] [Invalid argumen -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 10:51:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 10:51:29 +0000 Subject: [Bugs] [Bug 1712322] Brick logs inundated with [2019-04-27 22:14:53.378047] I [dict.c:541:dict_get] (-->/usr/lib64/glusterfs/6.0/xlator/features/worm.so(+0x7241) [0x7fe857bb3241] -->/usr/lib64/glusterfs/6.0/xlator/features/locks.so(+0x1c219) [0x7fe857dda219] [Invalid argumen In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712322 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22756 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 10:51:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 10:51:30 +0000 Subject: [Bugs] [Bug 1712322] Brick logs inundated with [2019-04-27 22:14:53.378047] I [dict.c:541:dict_get] (-->/usr/lib64/glusterfs/6.0/xlator/features/worm.so(+0x7241) [0x7fe857bb3241] -->/usr/lib64/glusterfs/6.0/xlator/features/locks.so(+0x1c219) [0x7fe857dda219] [Invalid argumen In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712322 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22756 (lock: check null value of dict to avoid log flooding) posted (#1) for review on master by Susant Palai -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 11:23:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 11:23:12 +0000 Subject: [Bugs] [Bug 1711820] Typo in cli return string. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711820 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22757 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 11:23:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 11:23:13 +0000 Subject: [Bugs] [Bug 1711820] Typo in cli return string. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711820 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22757 (cli: Fixed typos) posted (#1) for review on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 11:37:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 11:37:12 +0000 Subject: [Bugs] [Bug 1708926] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708926 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-21 11:37:12 --- Comment #6 from Worker Ant --- REVIEW: https://review.gluster.org/22743 (afr/frame: Destroy frame after afr_selfheal_entry_granular) merged (#3) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 11:42:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 11:42:14 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #39 from Worker Ant --- REVIEW: https://review.gluster.org/22753 (tests/quick-read-with-upcall.t: increase the timeout) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 21 12:53:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 12:53:27 +0000 Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned properly after a ec fini In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703948 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue May 21 12:59:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 12:59:35 +0000 Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned properly after a ec fini In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703948 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22758 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 21 12:59:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 21 May 2019 12:59:36 +0000 Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned properly after a ec fini In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703948 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22758 (ec/fini: Fix race with ec_fini and ec_notify) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 22 05:23:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 05:23:55 +0000 Subject: [Bugs] [Bug 1712668] New: Remove-brick shows warning cluster.force-migration enabled where as cluster.force-migration is disabled on the volume Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712668 Bug ID: 1712668 Summary: Remove-brick shows warning cluster.force-migration enabled where as cluster.force-migration is disabled on the volume Product: GlusterFS Version: mainline Status: NEW Component: cli Keywords: Regression Severity: medium Assignee: bugs at gluster.org Reporter: sacharya at redhat.com CC: bugs at gluster.org, nchilaka at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, sheggodu at redhat.com, storage-qa-internal at redhat.com, ubansal at redhat.com Depends On: 1708183 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1708183 [Bug 1708183] Remove-brick shows warning cluster.force-migration enabled where as cluster.force-migration is disabled on the volume -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 22 06:00:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 06:00:41 +0000 Subject: [Bugs] [Bug 1711950] Account in download.gluster.org to upload the build packages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711950 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com Flags| |needinfo? | |needinfo?(mscherer at redhat.c | |om) --- Comment #1 from Deepshikha khandelwal --- Misc, can you please take a look at it. -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed May 22 06:21:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 06:21:36 +0000 Subject: [Bugs] [Bug 1711945] create account on download.gluster.org In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711945 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com Flags| |needinfo?(mscherer at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 22 06:59:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 06:59:38 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22760 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 22 06:59:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 06:59:39 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #667 from Worker Ant --- REVIEW: https://review.gluster.org/22760 (tests: Add changelog api tests) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 22 07:44:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 07:44:25 +0000 Subject: [Bugs] [Bug 1712741] New: glusterd_svcs_stop should call individual wrapper function to stop rather than calling the glusterd_svc_stop Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712741 Bug ID: 1712741 Summary: glusterd_svcs_stop should call individual wrapper function to stop rather than calling the glusterd_svc_stop Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: rkavunga at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: glusterd_svcs_stop should call individual wrapper function to stop a daemon rather than calling glusterd_svc_stop. For example for shd, it should call glusterd_shdsvc_stop instead of calling basic API function to stop. Because the individual functions for each daemon could be doing some specific operation in their wrapper function Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. replace-brick and reset-brick operations are using this 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
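For context, a minimal sketch of the operations named in the steps to reproduce above (replace-brick and reset-brick), which go through glusterd_svcs_stop; the volume name, host names and brick paths are hypothetical and the commands are illustrative only:

------------------------------------------------------------------------------------------------------------
# Hypothetical 1x3 replica volume used only to illustrate the two operations
gluster volume create testvol replica 3 node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 force
gluster volume start testvol

# reset-brick: take a brick offline and bring it back on the same path
gluster volume reset-brick testvol node1:/bricks/b1 start
gluster volume reset-brick testvol node1:/bricks/b1 node1:/bricks/b1 commit force

# replace-brick: move a brick to a new path on the same node
gluster volume replace-brick testvol node2:/bricks/b1 node2:/bricks/b2 commit force
------------------------------------------------------------------------------------------------------------

Per the report above, either operation should exercise the daemon stop path that the proposed wrapper change touches.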
From bugzilla at redhat.com Wed May 22 07:47:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 07:47:55 +0000 Subject: [Bugs] [Bug 1712741] glusterd_svcs_stop should call individual wrapper function to stop rather than calling the glusterd_svc_stop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712741 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |rkavunga at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 22 08:12:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 08:12:25 +0000 Subject: [Bugs] [Bug 1712741] glusterd_svcs_stop should call individual wrapper function to stop rather than calling the glusterd_svc_stop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712741 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22761 (glusterd/svc: glusterd_svcs_stop should call individual wrapper function) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 22 08:12:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 08:12:24 +0000 Subject: [Bugs] [Bug 1712741] glusterd_svcs_stop should call individual wrapper function to stop rather than calling the glusterd_svc_stop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712741 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22761 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 22 10:47:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 10:47:52 +0000 Subject: [Bugs] [Bug 1708926] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708926 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Flags|needinfo?(rkavunga at redhat.c | |om) | Keywords| |Reopened -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 22 13:11:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 13:11:42 +0000 Subject: [Bugs] [Bug 1703948] Self-heal daemon resources are not cleaned properly after a ec fini In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703948 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-05-13 06:18:49 |2019-05-22 13:11:42 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22758 (ec/fini: Fix race with ec_fini and ec_notify) merged (#1) on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed May 22 14:38:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 14:38:46 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22766 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 22 14:38:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 14:38:48 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #668 from Worker Ant --- REVIEW: https://review.gluster.org/22766 ([WIP]dict.c: remove one strlen() done under lock.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 22 15:53:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 15:53:18 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22767 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Wed May 22 15:53:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 15:53:18 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1624 from Worker Ant --- REVIEW: https://review.gluster.org/22767 (Fix some \"Null pointer dereference\" coverity issues) posted (#1) for review on master by Xavi Hernandez -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 05:14:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 05:14:16 +0000 Subject: [Bugs] [Bug 1712322] Brick logs inundated with [2019-04-27 22:14:53.378047] I [dict.c:541:dict_get] (-->/usr/lib64/glusterfs/6.0/xlator/features/worm.so(+0x7241) [0x7fe857bb3241] -->/usr/lib64/glusterfs/6.0/xlator/features/locks.so(+0x1c219) [0x7fe857dda219] [Invalid argumen In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712322 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22756 (lock: check null value of dict to avoid log flooding) merged (#4) on master by Krutika Dhananjay -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu May 23 08:49:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 08:49:54 +0000 Subject: [Bugs] [Bug 1711240] [GNFS] gf_nfs_mt_inode_ctx serious memory leak In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711240 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-23 08:49:54 --- Comment #6 from Worker Ant --- REVIEW: https://review.gluster.org/22738 (inode: fix wrong loop count in __inode_ctx_free) merged (#4) on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 09:18:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 09:18:08 +0000 Subject: [Bugs] [Bug 1711820] Typo in cli return string. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711820 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-23 09:18:08 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22757 (cli: Fixed typos) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 09:38:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 09:38:02 +0000 Subject: [Bugs] [Bug 1713260] New: Using abrt-action-analyze-c on core dumps on CI Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713260 Bug ID: 1713260 Summary: Using abrt-action-analyze-c on core dumps on CI Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Severity: medium Priority: medium Assignee: bugs at gluster.org Reporter: sankarshan at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 09:56:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 09:56:44 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #669 from Worker Ant --- REVIEW: https://review.gluster.org/22675 (glusterd-utils.c: skip checksum when possible.) merged (#6) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 10:14:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 10:14:05 +0000 Subject: [Bugs] [Bug 1602824] SMBD crashes when streams_attr VFS is used with Gluster VFS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1602824 --- Comment #4 from ryan at magenta.tv --- I've carried out some more testing with Samba 4.9.8 and two the different Gluster VFS plugins and have found the standard 'glusterfs' causes and issue when used in conjunction with fruit. 
SMB.conf below: ------------------------------------------------------------------------------------------------------------ [global] security = ADS workgroup = MAGENTA realm = MAGENTA.LOCAL netbios name = DEVCLUSTER01 max protocol = SMB3 min protocol = SMB2 ea support = yes clustering = yes server signing = no max log size = 10000 glusterfs:loglevel = 5 log file = /var/log/samba/log-%M.smbd logging = file log level = 3 template shell = /sbin/nologin winbind offline logon = false winbind refresh tickets = yes winbind enum users = Yes winbind enum groups = Yes allow trusted domains = yes passdb backend = tdbsam idmap cache time = 604800 idmap negative cache time = 300 winbind cache time = 604800 idmap config magenta:backend = rid idmap config magenta:range = 10000-999999 idmap config * : backend = tdb idmap config * : range = 3000-7999 guest account = nobody map to guest = bad user force directory mode = 0777 force create mode = 0777 create mask = 0777 directory mask = 0777 hide unreadable = no store dos attributes = no unix extensions = no load printers = no printing = bsd printcap name = /dev/null disable spoolss = yes glusterfs:volfile_server = localhost kernel share modes = No strict locking = auto oplocks = yes durable handles = yes kernel oplocks = no posix locking = no level2 oplocks = no readdir_attr:aapl_rsize = yes readdir_attr:aapl_finder_info = no readdir_attr:aapl_max_access = no [VFS_Apple_Tests] read only = no guest ok = yes vfs objects = catia fruit streams_xattr glusterfs glusterfs:volume = mcv01 path = "/data" recycle:keeptree = yes recycle:directory_mode = 0770 recycle:versions = yes recycle:repository = .recycle recycle:subdir_mode = 0777 valid users = "nobody" @"audio" "MAGENTA\r.launchbury" [FUSE_Apple_Tests] read only = no guest ok = yes vfs objects = catia fruit streams_xattr glusterfs_fuse path = "/mnt/mcv01/data" recycle:repository = .recycle recycle:keeptree = yes recycle:versions = yes recycle:directory_mode = 0770 recycle:subdir_mode = 0777 ------------------------------------------------------------------------------------------------------------ Testing: Using Finder and OS X 10.14.4 - Connect to each share - Try to create a folder in sub-folder of each share Result: - Share 'VFS_Apple_Tests' produced error -50 - Share 'FUSE_Apple_Tests' worked as expected Debug level 10 logs attached -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 10:16:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 10:16:14 +0000 Subject: [Bugs] [Bug 1602824] SMBD crashes when streams_attr VFS is used with Gluster VFS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1602824 --- Comment #5 from ryan at magenta.tv --- Created attachment 1572423 --> https://bugzilla.redhat.com/attachment.cgi?id=1572423&action=edit Debug level 10 log of working test -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu May 23 10:16:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 10:16:42 +0000 Subject: [Bugs] [Bug 1602824] SMBD crashes when streams_attr VFS is used with Gluster VFS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1602824 --- Comment #6 from ryan at magenta.tv --- Created attachment 1572424 --> https://bugzilla.redhat.com/attachment.cgi?id=1572424&action=edit Debug level 10 log of failed test01 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 10:30:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 10:30:37 +0000 Subject: [Bugs] [Bug 1713284] New: ./tests/basic/gfapi/gfapi-ssl-test.t is failing too often in regression Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713284 Bug ID: 1713284 Summary: ./tests/basic/gfapi/gfapi-ssl-test.t is failing too often in regression Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: ./tests/basic/gfapi/gfapi-ssl-test.t is failing too often in regression jobs. Logs can be found at https://build.gluster.org/job/centos7-regression/6148/ Version-Release number of selected component (if applicable): mainline How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 11:04:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 11:04:48 +0000 Subject: [Bugs] [Bug 1713284] ./tests/basic/gfapi/gfapi-ssl-test.t is failing too often in regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713284 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-05-23 11:04:48 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 11:19:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 11:19:36 +0000 Subject: [Bugs] [Bug 1713307] New: ganesh-nfs didn't failback when writing files on Mac nfs client if the power is shut down Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713307 Bug ID: 1713307 Summary: ganesh-nfs didn't failback when writing files on Mac nfs client if the power is shut down Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: ganesha-nfs Severity: medium Assignee: bugs at gluster.org Reporter: guol-fnst at cn.fujitsu.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: We did some failover/failback tests on 3 nodes (Node-1, Node-2, Node-3). The software architecture is "glusterfs +ctdb(public address) + nfs-ganesha". Gluster volume type is replica 3. We used CTDB's floating IP to mount the volume on Mac OS via NFS from Node-1, and wrote file A to the mountpoint. While file A was being copied to the mountpoint, the power of Node-1 was shut down. The copying process was suspended; however, we could copy other files to the mountpoint normally.
20 minutes later, everything became OK and file A resumed being copied. The Windows NFS client shows the same behavior as the Mac client, but the CentOS NFS client works very well and shows no such suspension. Version-Release number of selected component (if applicable): gluster version: 4.1.8 nfs-ganesha version: 2.7.3 Mac client (10.14.0) How reproducible: Steps to Reproduce: 1. Create a gluster volume (replica 3), and export it with CTDB+ganesha-nfs 2. Mount the vol on Mac OS or Windows via the CTDB floating IP. Copy a file to the mountpoint. 3. Shut down the power of the node where the floating IP exists. Actual results: The copying process was suspended; however, we could copy other files to the mountpoint normally. 20 minutes later, everything became OK and file A resumed being copied. No matter how many times we try, we must wait for 20 minutes. Expected results: File A can be transferred in 1 or 2 minutes. Additional info: Here is the ganesha log of Node-2 when the floating IP transferred to Node-2. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 11:29:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 11:29:31 +0000 Subject: [Bugs] [Bug 1712220] tests/geo-rep: arequal checksum comparison always succeeds In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712220 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-23 11:29:31 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22751 (tests/geo-rep: Fix arequal checksum comparison) merged (#2) on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 23 11:29:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 11:29:54 +0000 Subject: [Bugs] [Bug 1709738] geo-rep: Sync hangs with tarssh as sync-engine In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709738 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-23 11:29:54 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22725 (geo-rep: Fix sync hang with tarssh) merged (#3) on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 11:30:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 11:30:19 +0000 Subject: [Bugs] [Bug 1712223] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712223 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-23 11:30:19 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22752 (geo-rep: Convert gfid conflict resolutiong logs into debug) merged (#1) on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
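Returning to the NFS-Ganesha failback report in bug 1713307 above, a minimal sketch of the reproduction steps under stated assumptions: the volume name, brick paths and node names are placeholders, 192.0.2.10 stands in for the CTDB floating IP, the volume is assumed to already be exported through CTDB+NFS-Ganesha with pseudo path /gvol, and the mount options are only one plausible choice for a Mac client:

------------------------------------------------------------------------------------------------------------
# On one gluster node: create and start a replica 3 volume
gluster volume create gvol replica 3 node1:/bricks/gvol node2:/bricks/gvol node3:/bricks/gvol
gluster volume start gvol

# On the Mac client: mount the export through the CTDB floating IP
sudo mkdir -p /private/tmp/gvol
sudo mount -t nfs -o vers=3,resvport 192.0.2.10:/gvol /private/tmp/gvol

# Start copying a large file, then power off the node that currently holds the
# floating IP; per the report, the copy stalls for ~20 minutes before resuming
cp /path/to/fileA /private/tmp/gvol/
------------------------------------------------------------------------------------------------------------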
From bugzilla at redhat.com Thu May 23 11:48:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 11:48:33 +0000 Subject: [Bugs] [Bug 1701203] GlusterFS 6.2 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701203 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22768 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 11:48:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 11:48:35 +0000 Subject: [Bugs] [Bug 1701203] GlusterFS 6.2 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701203 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22768 (doc: Added release notes for 6.2) posted (#1) for review on release-6 by hari gowtham -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 13:53:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 13:53:07 +0000 Subject: [Bugs] [Bug 1701203] GlusterFS 6.2 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1701203 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-23 13:53:07 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22768 (doc: Added release notes for 6.2) merged (#2) on release-6 by hari gowtham -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 14:28:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 14:28:25 +0000 Subject: [Bugs] [Bug 1713391] New: Access to wordpress instance of gluster.org required for release management Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713391 Bug ID: 1713391 Summary: Access to wordpress instance of gluster.org required for release management Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Severity: medium Assignee: bugs at gluster.org Reporter: rkothiya at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: As I am managing the release of glusterfs, I need access to gluster.org workpress instance, to publish the release schedule and do other activities related to release. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu May 23 14:31:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 14:31:31 +0000 Subject: [Bugs] [Bug 1713391] Access to wordpress instance of gluster.org required for release management In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713391 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com, | |srangana at redhat.com Flags| |needinfo?(atumball at redhat.c | |om) --- Comment #1 from Shyamsundar --- Approved by me, adding Amar for another ack! -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 14:33:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 14:33:55 +0000 Subject: [Bugs] [Bug 1708257] Grant additional maintainers merge rights on release branches In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708257 --- Comment #4 from Shyamsundar --- Deepshika/Misc, can we get this done, else their ability to manage releases is not feasible. Thanks! Pint Rinku for your github details as well. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 15:26:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 15:26:09 +0000 Subject: [Bugs] [Bug 1652887] Geo-rep help looks to have a typo. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1652887 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-03-25 16:32:14 |2019-05-23 15:26:09 --- Comment #7 from Worker Ant --- REVIEW: https://review.gluster.org/22689 (geo-rep: Geo-rep help text issue) merged (#3) on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 23 16:05:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 16:05:36 +0000 Subject: [Bugs] [Bug 1713429] New: My personal blog contenting is not feeding to https://planet.gluster.org/ Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713429 Bug ID: 1713429 Summary: My personal blog contenting is not feeding to https://planet.gluster.org/ Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: rkavunga at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: My personal content having the tag glusterfs should have been available in the https://planet.gluster.org/ after merging https://github.com/gluster/planet-gluster/pull/47. The content is not feeding to to the site. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu May 23 18:19:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 18:19:31 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1625 from Worker Ant --- REVIEW: https://review.gluster.org/22737 (glusterd: coverity fix) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 19:23:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 19:23:40 +0000 Subject: [Bugs] [Bug 1708257] Grant additional maintainers merge rights on release branches In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708257 Rinku changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(rkothiya at redhat.c | |om) | --- Comment #5 from Rinku --- Hi Rinku's Username: rkothiya and 2FA is enabled for this account on github. Regards Rinku -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 03:46:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 03:46:39 +0000 Subject: [Bugs] [Bug 1706603] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706603 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-05-06 00:01:57 |2019-05-24 03:46:39 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22674 (tests: Test openfd heal doesn't truncate files) merged (#8) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 03:46:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 03:46:41 +0000 Subject: [Bugs] [Bug 1709660] Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709660 Bug 1709660 depends on bug 1706603, which changed state. Bug 1706603 Summary: Glusterfsd crashing in ec-inode-write.c, in GF_ASSERT https://bugzilla.redhat.com/show_bug.cgi?id=1706603 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 05:06:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 05:06:46 +0000 Subject: [Bugs] [Bug 1708257] Grant additional maintainers merge rights on release branches In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708257 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com --- Comment #6 from Deepshikha khandelwal --- Done. You all have now merge rights on the given branches. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 24 05:07:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 05:07:23 +0000 Subject: [Bugs] [Bug 1708257] Grant additional maintainers merge rights on release branches In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708257 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-24 05:07:23 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 09:14:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 09:14:24 +0000 Subject: [Bugs] [Bug 1713613] New: rebalance start command doesn't throw up error message if the command fails Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713613 Bug ID: 1713613 Summary: rebalance start command doesn't throw up error message if the command fails Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: glusterd Assignee: amukherj at redhat.com Reporter: srakonde at redhat.com QA Contact: bmekala at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, pasik at iki.fi, rhs-bugs at redhat.com, sankarshan at redhat.com, srakonde at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1683526, 1699176 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1699176 +++ +++ This bug was initially created as a clone of Bug #1683526 +++ Description of problem: When a rebalance start command fails, it doesn't throw up the error message back to CLI. Version-Release number of selected component (if applicable): release-6 How reproducible: Always Steps to Reproduce: 1. Create 1 X 1 volume, trigger rebalance start. Command fails as glusterd.log complains about following [2019-02-27 06:29:15.448303] E [MSGID: 106218] [glusterd-rebalance.c:462:glusterd_rebalance_cmd_validate] 0-glusterd: Volume test-vol5 is not a distribute type or contains only 1 brick But CLI doesn't throw up any error messages. Actual results: CLI doesn't throw up an error message. Expected results: CLI should throw up an error message. Additional info: --- Additional comment from Worker Ant on 2019-04-11 18:20:42 IST --- REVIEW: https://review.gluster.org/22547 (glusterd: display an error when rebalance start is failed) posted (#1) for review on master by Sanju Rakonde --- Additional comment from Worker Ant on 2019-04-12 07:45:01 IST --- REVIEW: https://review.gluster.org/22547 (glusterd: display an error when rebalance start is failed) posted (#2) for review on master by Sanju Rakonde --- Additional comment from Worker Ant on 2019-04-12 09:21:03 IST --- REVIEW: https://review.gluster.org/22547 (glusterd: display an error when rebalance start is failed) merged (#3) on master by Atin Mukherjee Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1683526 [Bug 1683526] rebalance start command doesn't throw up error message if the command fails https://bugzilla.redhat.com/show_bug.cgi?id=1699176 [Bug 1699176] rebalance start command doesn't throw up error message if the command fails -- You are receiving this mail because: You are on the CC list for the bug. 
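For clarity, a minimal sketch of the scenario described in bug 1713613 above, assuming a hypothetical single-brick volume; the volume name test-vol5 comes from the quoted log line, while the host and brick path are placeholders:

------------------------------------------------------------------------------------------------------------
# A plain 1x1 volume: neither distributed nor more than one brick
gluster volume create test-vol5 node1:/bricks/b1 force
gluster volume start test-vol5

# glusterd rejects this internally (see the quoted glusterd.log message),
# but before the fix the CLI printed no error for the failure
gluster volume rebalance test-vol5 start
------------------------------------------------------------------------------------------------------------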
From bugzilla at redhat.com Fri May 24 09:14:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 09:14:24 +0000 Subject: [Bugs] [Bug 1683526] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683526 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1713613 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1713613 [Bug 1713613] rebalance start command doesn't throw up error message if the command fails -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 24 09:14:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 09:14:24 +0000 Subject: [Bugs] [Bug 1699176] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1699176 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1713613 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1713613 [Bug 1713613] rebalance start command doesn't throw up error message if the command fails -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 09:14:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 09:14:28 +0000 Subject: [Bugs] [Bug 1713613] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713613 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 24 09:16:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 09:16:10 +0000 Subject: [Bugs] [Bug 1713613] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713613 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST Assignee|amukherj at redhat.com |srakonde at redhat.com --- Comment #2 from Sanju --- upstream patch: https://review.gluster.org/22547 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 24 15:01:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:01:14 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #670 from Worker Ant --- REVIEW: https://review.gluster.org/22754 (tests: Fix spurious failures in ta-write-on-bad-brick.t) merged (#2) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 24 15:05:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:05:30 +0000 Subject: [Bugs] [Bug 1713429] My personal blog contenting is not feeding to https://planet.gluster.org/ In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713429 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged CC| |dkhandel at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 15:08:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:08:12 +0000 Subject: [Bugs] [Bug 1713260] Using abrt-action-analyze-c on core dumps on CI In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713260 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged CC| |dkhandel at redhat.com, | |ravishankar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 15:08:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:08:41 +0000 Subject: [Bugs] [Bug 1711950] Account in download.gluster.org to upload the build packages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711950 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Flags|needinfo? | |needinfo?(mscherer at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 24 15:09:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:09:16 +0000 Subject: [Bugs] [Bug 1711945] create account on download.gluster.org In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711945 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 23 11:23:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 11:23:47 +0000 Subject: [Bugs] [Bug 1713307] ganesh-nfs didn't failback when writing files on Mac nfs client if the power is shut down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713307 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged CC| |jthottan at redhat.com, | |skoduri at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 24 15:12:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:12:48 +0000 Subject: [Bugs] [Bug 1712668] Remove-brick shows warning cluster.force-migration enabled where as cluster.force-migration is disabled on the volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712668 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Flags| |needinfo?(sacharya at redhat.c | |om) --- Comment #1 from Ravishankar N --- Shwetha, please assign this to yourself if you are working on this bug. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 15:14:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:14:16 +0000 Subject: [Bugs] [Bug 1711400] Dispersed volumes leave open file descriptors on nodes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711400 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged CC| |aspandey at redhat.com, | |jahernan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 15:20:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:20:37 +0000 Subject: [Bugs] [Bug 1710744] [FUSE] Endpoint is not connected after "Found anomalies" error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710744 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged CC| |kkeithle at redhat.com, | |ndevos at redhat.com, | |rgowdapp at redhat.com --- Comment #2 from Ravishankar N --- Adding protocol/client MAINTAINERS to the BZ by looking at the backtrace in the bug description. It might not be related to the client xlator but they can bring it to the attention of other component owners if needed. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 15:23:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:23:53 +0000 Subject: [Bugs] [Bug 1713307] ganesh-nfs didn't failback when writing files on Mac nfs client if the power is shut down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713307 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(guol-fnst at cn.fuji | |tsu.com) --- Comment #1 from Soumya Koduri --- Can you please collect packet traces from all the machines (Node-1, Node-2 and especially from the client machine) while repeating this test for just that single file (i.e, FileA). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 15:26:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:26:49 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... 
file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged CC| |khiremat at redhat.com, | |moagrawa at redhat.com, | |pgurusid at redhat.com, | |ravishankar at redhat.com --- Comment #9 from Ravishankar N --- Adding folks who have worked on posix health-checker and containers to see if there is something that needs fixing in gluster. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 15:29:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:29:00 +0000 Subject: [Bugs] [Bug 1708531] gluster rebalance status brain splits In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708531 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 15:29:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:29:28 +0000 Subject: [Bugs] [Bug 1708505] [EC] /tests/basic/ec/ec-data-heal.t is failing as heal is not happening properly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708505 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Assignee|bugs at gluster.org |aspandey at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 15:30:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:30:01 +0000 Subject: [Bugs] [Bug 1707866] Thousands of duplicate files in glusterfs mountpoint directory listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707866 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 15:43:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:43:24 +0000 Subject: [Bugs] [Bug 1706842] Hard Failover with Samba and Glusterfs fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706842 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 15:43:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:43:57 +0000 Subject: [Bugs] [Bug 1706716] glusterd generated core while running ./tests/bugs/cli/bug-1077682.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1706716 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged -- You are receiving this mail because: You are on the CC list for the bug. 
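As a follow-up to the packet-trace request in bug 1713307 (comment #1) above, a minimal sketch of one way to capture the traces, assuming NFSv3 on port 2049 and the placeholder floating IP 192.0.2.10; interface names and output paths are hypothetical, and on the Mac client "-i any" would need to be replaced with the active interface (e.g. -i en0):

------------------------------------------------------------------------------------------------------------
# Run one capture per machine (Node-1, Node-2 and the client) before repeating
# the test, then stop it with Ctrl-C once the copy has stalled and resumed
tcpdump -i any -s 0 -w /tmp/nfs-failover-$(hostname).pcap host 192.0.2.10 and port 2049
------------------------------------------------------------------------------------------------------------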
From bugzilla at redhat.com Fri May 24 15:44:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:44:42 +0000 Subject: [Bugs] [Bug 1705351] glusterfsd crash after days of running In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1705351 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 15:45:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:45:30 +0000 Subject: [Bugs] [Bug 1703435] gluster-block: Upstream Jenkins job which get triggered at PR level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703435 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 24 15:45:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 15:45:42 +0000 Subject: [Bugs] [Bug 1703433] gluster-block: setup GCOV & LCOV job In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1703433 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 24 16:05:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 16:05:48 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #10 from Mohit Agrawal --- Hi, As per the log, aio_write failed due to the underlying file system, so the brick process assumed the underlying file system is not healthy and killed itself. A brick process kills itself only when every brick runs as an independent process, i.e. brick_multiplex is not enabled; otherwise the brick process sends a detach event to the respective brick so that only that specific brick is detached. For container environments, we do recommend enabling brick multiplex to attach multiple bricks to a single brick process. There are two ways to avoid this: 1) Either enable brick_multiplex, or 2) Disable the health check thread (to disable the health check thread you can configure health-check-interval to 0). Thanks, Mohit Agrawal -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 16:35:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 16:35:31 +0000 Subject: [Bugs] [Bug 1713730] Failure when glusterd is configured to bind specific IPv6 address.
If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713730 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Version|unspecified |mainline Keywords| |Triaged Component|rpc |rpc CC| |bugs at gluster.org, | |ravishankar at redhat.com Assignee|rgowdapp at redhat.com |bugs at gluster.org QA Contact|rhinduja at redhat.com | Product|Red Hat Gluster Storage |GlusterFS --- Comment #3 from Ravishankar N --- You used the wrong Product type. Fixing it now. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 16:56:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 16:56:13 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #11 from Jeff Bischoff --- Mohit, thanks for your feedback! We have been testing solution #2, and so far so good. We haven't rolled that change out to all our environments yet, but we haven't had an incident yet on the environments that do have the health checks disabled. Regarding solution #1, we appreciate the suggestion and will try out multiplexing. Just to confirm, what would the expected behavior be if we had multiplexing turned on and then ran into the same aio_write failure? You say the process will send a detach event to the specific brick. What happens after that? Will the brick eventually be automatically reattached? Or will it still require admin intervention, even with multiplexing? I'm assuming it's not wise to choose both solution #1 and #2? Better to just pick one? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 24 17:13:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 24 May 2019 17:13:26 +0000 Subject: [Bugs] [Bug 1713730] Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713730 --- Comment #4 from Amgad --- Let me know if any action is needed on my side for code submission! -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 25 02:00:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 25 May 2019 02:00:47 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #12 from Mohit Agrawal --- After enabling multiplex, the health_check_thread will send a detach request for the specific brick, but the brick will not be reattached automatically. Yes, you are right: admin intervention is required to attach the brick again; glusterd does not attach the brick automatically unless the user starts the volume again. To avoid this you don't need to choose both options, but there are several other benefits of enabling brick multiplex.
To know more please follow this https://gluster.home.blog/2019/05/06/why-brick-multiplexing/ If you just want to avoid this situation you can use 2nd option. Thanks, Mohit Agrawal -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat May 25 04:53:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 25 May 2019 04:53:26 +0000 Subject: [Bugs] [Bug 1708156] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708156 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-25 04:53:26 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22690 (cluster/ec: honor contention notifications for partially acquired locks) merged (#5) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun May 26 08:20:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 26 May 2019 08:20:36 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22772 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun May 26 08:20:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 26 May 2019 08:20:37 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #671 from Worker Ant --- REVIEW: https://review.gluster.org/22772 ([WIP]glusterd-volgen.c: remove BD xlator from the graph) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 23 18:19:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 23 May 2019 18:19:31 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1626 from Worker Ant --- REVIEW: https://review.gluster.org/22767 (Fix some \"Null pointer dereference\" coverity issues) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Sun May 26 14:05:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 26 May 2019 14:05:36 +0000 Subject: [Bugs] [Bug 1713391] Access to wordpress instance of gluster.org required for release management In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713391 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(atumball at redhat.c | |om) | --- Comment #2 from Amar Tumballi --- Ack, Approved by me, too! -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
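For reference, the two mitigations discussed for bug 1709959 above (comments #10 and #12) correspond to standard volume options. This is only an illustrative sketch, assuming a recent glusterfs release; <volname> is a placeholder, and the exact option names should be confirmed with `gluster volume set help` on the version in use:

    # Option 1: enable brick multiplexing. It is a cluster-wide option,
    # so it is set against the special volume name "all".
    gluster volume set all cluster.brick-multiplex on

    # Option 2: disable the posix health-check thread on the affected volume
    # by setting its interval to 0 (it defaults to a non-zero number of seconds).
    gluster volume set <volname> storage.health-check-interval 0

    # Confirm the current value of the per-volume option.
    gluster volume get <volname> storage.health-check-interval

As comment #12 notes, option 1 only changes how a failing brick is detached and still needs an admin to bring the brick back, while option 2 simply stops the health-check thread from killing the brick process.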
From bugzilla at redhat.com Mon May 27 01:00:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 01:00:54 +0000 Subject: [Bugs] [Bug 1348072] Backups for Gerrit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1348072 sankarshan changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sankarshan at redhat.com Flags| |needinfo?(mscherer at redhat.c | |om) --- Comment #5 from sankarshan --- Michael, is this being planned in the near future? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 01:47:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 01:47:48 +0000 Subject: [Bugs] [Bug 1693385] request to change the version of fedora in fedora-smoke-job In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693385 sankarshan changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sankarshan at redhat.com --- Comment #4 from sankarshan --- (In reply to Amar Tumballi from comment #3) > Agree, I was asking for a job without DEBUG mainly because a few times, > there may be warning without DEBUG being there during compile (ref: > https://review.gluster.org/22347 && https://review.gluster.org/22389 ) > > As I had --enable-debug while testing locally, never saw the warning, and > none of the smoke tests captured the error. If we had a job without > --enable-debug, we could have seen the warning while compiling, which would > have failed Smoke. Is the request here to have a job without --enable-debug? Attempting to understand this because there have not been many updates, or much clarity, on this work. Also, Fedora 30 is now GA - https://fedoramagazine.org/announcing-fedora-30/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 01:52:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 01:52:40 +0000 Subject: [Bugs] [Bug 1504713] Move planet build to be triggered by Jenkins In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1504713 sankarshan changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sankarshan at redhat.com Flags| |needinfo?(mscherer at redhat.c | |om) --- Comment #2 from sankarshan --- (In reply to M. Scherer from comment #1) > I also wonder if we could integrate jenkins with github, to replace the > travis build. It tend to change underneat us and may surprise users. > > Ideally, I also would like a system that open ticket when a feed fail for > too long. Is this now in place? The last time I checked with Deepshikha, there were recurring reports of the cron job behind the planet scripts failing randomly. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Mon May 27 01:54:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 01:54:11 +0000 Subject: [Bugs] [Bug 1514365] Generate report to identify first time contributors In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1514365 sankarshan changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com Flags| |needinfo?(dkhandel at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 27 02:09:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 02:09:34 +0000 Subject: [Bugs] [Bug 1665361] Alerts for offline nodes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665361 sankarshan changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com, | |sankarshan at redhat.com Flags| |needinfo?(dkhandel at redhat.c | |om) --- Comment #2 from sankarshan --- Is there any decision on whether Option#1 can be implemented? Deepshikha, can we have Naresh to look into this? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 02:10:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 02:10:24 +0000 Subject: [Bugs] [Bug 1678378] Add a nightly build verification job in Jenkins for release-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678378 sankarshan changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com, | |sankarshan at redhat.com Flags| |needinfo?(dkhandel at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 02:11:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 02:11:52 +0000 Subject: [Bugs] [Bug 1692349] gluster-csi-containers job is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692349 sankarshan changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |sankarshan at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-27 02:11:52 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 02:39:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 02:39:45 +0000 Subject: [Bugs] [Bug 1713391] Access to wordpress instance of gluster.org required for release management In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713391 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged CC| |ravishankar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 27 03:13:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 03:13:54 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #672 from Worker Ant --- REVIEW: https://review.gluster.org/22755 (tests: Add history api tests) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 03:36:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 03:36:22 +0000 Subject: [Bugs] [Bug 1713391] Access to wordpress instance of gluster.org required for release management In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713391 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |dkhandel at redhat.com, | |mscherer at redhat.com Flags| |needinfo?(mscherer at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 03:43:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 03:43:51 +0000 Subject: [Bugs] [Bug 1489325] Place to host gerritstats In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1489325 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com Flags|needinfo?(sankarshan at redhat |needinfo?(mscherer at redhat.c |.com) |om) |needinfo?(dkhandel at redhat.c | |om) | --- Comment #4 from Deepshikha khandelwal --- I need more info about this bug. What kind of stats do we want? What would be the end result of this? It will be an altogether separate hosted service. We can have it in the cage or on the same machine as Gerrit. @misc, thoughts? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 03:50:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 03:50:03 +0000 Subject: [Bugs] [Bug 1678378] Add a nightly build verification job in Jenkins for release-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678378 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(dkhandel at redhat.c | |om) | --- Comment #2 from Deepshikha khandelwal --- No, we have a nightly master job. Will add this job. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 03:53:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 03:53:59 +0000 Subject: [Bugs] [Bug 1693385] request to change the version of fedora in fedora-smoke-job In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693385 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(atumball at redhat.c | |om) | --- Comment #5 from Amar Tumballi --- The first request in this bug is: * Change the version of Fedora (currently 28) to 30.
- This can't be done without some head start time, because there are some 20-30 warnings with the newer compiler version. We need to fix them before making the job vote. - The best way is to have a job which doesn't vote (skip), but reports failure/success for at least a week or so. In that time, we fix all the warnings, make the job GREEN, and then make it vote. * `--enable-debug` is used everywhere in smoke tests, but the release bits are not built with it. Hence, I was thinking of having at least 1 of the smoke jobs not use it. Probably we should consider opening another bug for that. It can be done in fedora-smoke, or even in centos-smoke; it doesn't matter which. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 04:00:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 04:00:05 +0000 Subject: [Bugs] [Bug 1665361] Alerts for offline nodes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665361 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(dkhandel at redhat.c | |om) | --- Comment #3 from Deepshikha khandelwal --- In my view, we should have this on Nagios rather than in an alerting Jenkins job. Nagios is already in place for the builders, to alert about any memory failures and the like. Though I don't receive the notifications (that's a different story), it would be good to have just one such source of alerting. Naresh can look at the script if we agree on this. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 04:12:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 04:12:07 +0000 Subject: [Bugs] [Bug 1663780] On docs.gluster.org, we should convert spaces in folder or file names to 301 redirects to hypens In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663780 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |dkhandel at redhat.com Flags|needinfo?(dkhandel at redhat.c | |om) | --- Comment #3 from Deepshikha khandelwal --- I can take this up once I gather enough understanding of how this can be done. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 05:39:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 05:39:51 +0000 Subject: [Bugs] [Bug 1713730] Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713730 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST Assignee|bugs at gluster.org |amgad.saleh at nokia.com --- Comment #5 from Ravishankar N --- https://review.gluster.org/#/c/glusterfs/+/22769/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
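To make the `--enable-debug` point from bug 1693385 (comment #5 above) concrete, a minimal local sketch of the two build variants might look like the following. This assumes a standard autotools build of the glusterfs source tree and is not the actual Jenkins job definition:

    # Debug build, as the current smoke jobs do it.
    ./autogen.sh
    ./configure --enable-debug
    make -j"$(nproc)" 2>&1 | tee build-debug.log

    # Plain build, closer to the released bits; warnings that do not appear
    # in the debug build are more likely to show up here.
    make distclean
    ./configure
    make -j"$(nproc)" 2>&1 | tee build-plain.log

    # Compare the number of compiler warnings seen in each variant.
    grep -c 'warning:' build-debug.log build-plain.log

A non-voting job running the second variant for a week or so, as suggested in the comment, would surface the 20-30 warnings without blocking the existing smoke runs.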
From bugzilla at redhat.com Mon May 27 06:24:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 06:24:20 +0000 Subject: [Bugs] [Bug 1714098] New: Make debugging hung frames easier Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714098 Bug ID: 1714098 Summary: Make debugging hung frames easier Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: pkarampu at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: At the moment new stack doesn't populate frame->root->unique in all cases. This makes it difficult to debug hung frames by examining successive state dumps. Fuse and server xlator populate it whenever they can, but other xlators won't be able to assign one when they need to create a new frame/stack. What we need is for unique to be correct. If a stack with same unique is present in successive statedumps, that means the same operation is still in progress. This makes finding hung frames part of debugging hung frames easier. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 06:27:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 06:27:35 +0000 Subject: [Bugs] [Bug 1714098] Make debugging hung frames easier In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714098 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22773 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 06:27:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 06:27:36 +0000 Subject: [Bugs] [Bug 1714098] Make debugging hung frames easier In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714098 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22773 (stack: Make sure to have unique call-stacks in all cases) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 07:24:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 07:24:27 +0000 Subject: [Bugs] [Bug 1711764] Files inaccessible if one rebalance process is killed in a multinode volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711764 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon May 27 07:40:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 07:40:04 +0000 Subject: [Bugs] [Bug 1714124] New: Files inaccessible if one rebalance process is killed in a multinode volume Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714124 Bug ID: 1714124 Summary: Files inaccessible if one rebalance process is killed in a multinode volume Product: Red Hat Gluster Storage Version: rhgs-3.4 Status: NEW Component: distribute Assignee: spalai at redhat.com Reporter: nbalacha at redhat.com QA Contact: tdesala at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1711764 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1711764 +++ Description of problem: This is a consequence of https://review.gluster.org/#/c/glusterfs/+/17239/ and lookup-optimize being enabled. Rebalance directory processing steps on each node: 1. Set new layout on directory without the commit hash 2. List files on that local subvol. Migrate those files which fall into its bucket. Lookups are performed on the files only if it is determined that it is to be migrated by the process. 3. When done, update the layout on the local subvol with the layout containing the commit hash. When there are multiple rebalance processes processing the same directory, they finish at different times and one process can update the layout with the commit hash before the others are done listing and migrating their files. Clients will therefore see a complete layout even before all files have been looked up according to the new layout causing file access to fail. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. Create a 2x2 volume spanning 2 nodes. Create some directories and files on it. 2. Add 2 bricks to convert it to a 3x2 volume. 3. Start a rebalance on the volume and break into one rebalance process before it starts processing the directories. 4. Allow the second rebalance process to complete. Kill the process that is blocked by gdb. 5. Mount the volume and try to stat the files without listing the directories. Actual results: The stat will fail for several files with the error : stat: cannot stat ??: No such file or directory Expected results: Additional info: --- Additional comment from Nithya Balachandran on 2019-05-20 05:05:30 UTC --- The easiest solution is to have each node do the file lookups before the call to gf_defrag_should_i_migrate. Pros: Simple Cons: Will introduce more lookups but is pretty much the same as the number seen before https://review.gluster.org/#/c/glusterfs/+/17239/ --- Additional comment from Worker Ant on 2019-05-20 10:01:20 UTC --- REVIEW: https://review.gluster.org/22746 (cluster/dht: Lookup all files when processing directory) posted (#1) for review on master by N Balachandran Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1711764 [Bug 1711764] Files inaccessible if one rebalance process is killed in a multinode volume -- You are receiving this mail because: You are on the CC list for the bug. 
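The reproduction steps in the description above map to roughly the following commands. This is only a sketch: host names and brick paths are placeholders, and step 3's "break into one rebalance process" is done manually, e.g. by attaching gdb to the rebalance process on one node before it starts processing directories:

    # 1. 2x2 distributed-replicate volume across two nodes, lookup-optimize enabled.
    gluster volume create testvol replica 2 \
        node1:/bricks/b1 node2:/bricks/b1 node1:/bricks/b2 node2:/bricks/b2
    gluster volume start testvol
    gluster volume set testvol cluster.lookup-optimize on
    # ... create some directories and files through a client mount ...

    # 2. Add another replica pair to make it 3x2.
    gluster volume add-brick testvol node1:/bricks/b3 node2:/bricks/b3

    # 3./4. Start the rebalance; pause one node's rebalance process with gdb,
    # let the other node's rebalance complete, then kill the paused one.
    gluster volume rebalance testvol start

    # 5. From a client, stat files directly without listing their directory first.
    mount -t glusterfs node1:/testvol /mnt/testvol
    stat /mnt/testvol/dir1/file-1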
From bugzilla at redhat.com Mon May 27 07:40:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 07:40:04 +0000 Subject: [Bugs] [Bug 1711764] Files inaccessible if one rebalance process is killed in a multinode volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711764 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1714124 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1714124 [Bug 1714124] Files inaccessible if one rebalance process is killed in a multinode volume -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 27 07:40:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 07:40:07 +0000 Subject: [Bugs] [Bug 1714124] Files inaccessible if one rebalance process is killed in a multinode volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714124 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 27 07:40:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 07:40:42 +0000 Subject: [Bugs] [Bug 1714124] Files inaccessible if one rebalance process is killed in a multinode volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714124 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|spalai at redhat.com |nbalacha at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 27 07:41:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 07:41:26 +0000 Subject: [Bugs] [Bug 1714124] Files inaccessible if one rebalance process is killed in a multinode volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714124 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Severity|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 27 07:43:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 07:43:45 +0000 Subject: [Bugs] [Bug 1714124] Files inaccessible if one rebalance process is killed in a multinode volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714124 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon May 27 07:51:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 07:51:06 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #40 from Worker Ant --- REVIEW: https://review.gluster.org/22664 (glusterd/tier: remove tier related code from glusterd) merged (#8) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 09:49:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 09:49:24 +0000 Subject: [Bugs] [Bug 1659857] change max-port value in glusterd vol file to 60999 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659857 Anjana Suparna Sriram changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1714166 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 10:02:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 10:02:40 +0000 Subject: [Bugs] [Bug 1714172] New: ec ignores lock contention notifications for partially acquired locks Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714172 Bug ID: 1714172 Summary: ec ignores lock contention notifications for partially acquired locks Product: GlusterFS Version: 6 Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: jahernan at redhat.com CC: bugs at gluster.org Depends On: 1708156 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1708156 +++ Description of problem: When an inodelk is being acquired, it could happen that some bricks have already granted the lock while others don't. From the point of view of ec, the lock is not yet acquired. If at this point one of the bricks that has already granted the lock receives another inodelk request, it will send a contention notification to ec. Currently ec ignores those notifications until the lock is fully acquired. This means than once ec acquires the lock on all bricks, it won't be released immediately when eager-lock is used. Version-Release number of selected component (if applicable): mainline How reproducible: Very frequently when there are multiple concurrent operations on same directory Steps to Reproduce: 1. Create a disperse volume 2. Mount it from several clients 3. Create few files on a directory 4. Do 'ls' of that directory at the same time from all clients Actual results: Some 'ls' take several seconds to complete Expected results: All 'ls' should complete in less than a second Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1708156 [Bug 1708156] ec ignores lock contention notifications for partially acquired locks -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
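The steps to reproduce in the description of bug 1714172 above can be expressed roughly as follows; server names, brick paths and file counts are placeholders, and the 4+2 disperse geometry is just one example:

    # 1. Create and start a disperse volume (4 data + 2 redundancy bricks).
    gluster volume create ecvol disperse 6 redundancy 2 \
        server1:/bricks/ec server2:/bricks/ec server3:/bricks/ec \
        server4:/bricks/ec server5:/bricks/ec server6:/bricks/ec
    gluster volume start ecvol

    # 2./3. Mount the volume on several clients and create a few files in one directory.
    mount -t glusterfs server1:/ecvol /mnt/ecvol      # run on each client
    mkdir -p /mnt/ecvol/dir1 && touch /mnt/ecvol/dir1/file{1..20}

    # 4. From all clients at the same time, list that directory and time it.
    time ls -l /mnt/ecvol/dir1

Before the fix, some of the concurrent listings can take several seconds because a partially acquired eager lock ignores the contention notification; with the fix they should complete in under a second.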
From bugzilla at redhat.com Mon May 27 10:02:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 10:02:40 +0000 Subject: [Bugs] [Bug 1708156] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708156 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1714172 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1714172 [Bug 1714172] ec ignores lock contention notifications for partially acquired locks -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 27 10:03:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 10:03:17 +0000 Subject: [Bugs] [Bug 1714172] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714172 Xavi Hernandez changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |jahernan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 10:13:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 10:13:28 +0000 Subject: [Bugs] [Bug 1714172] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714172 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22774 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 27 10:13:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 10:13:29 +0000 Subject: [Bugs] [Bug 1714172] ec ignores lock contention notifications for partially acquired locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714172 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22774 (cluster/ec: honor contention notifications for partially acquired locks) posted (#1) for review on release-6 by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 27 14:36:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 14:36:48 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #673 from Worker Ant --- REVIEW: https://review.gluster.org/22760 (tests: Add changelog api tests) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 27 14:59:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 14:59:19 +0000 Subject: [Bugs] [Bug 1711250] bulkvoldict thread is not handling all volumes while brick multiplex is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711250 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-27 14:59:19 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22739 (glusterd: bulkvoldict thread is not handling all volumes) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon May 27 15:35:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 15:35:48 +0000 Subject: [Bugs] [Bug 1637652] Glusterd2 is not cleaning itself In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1637652 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED CC| |atumball at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-05-27 15:35:48 --- Comment #4 from Amar Tumballi --- Not working actively on glusterd2, and hence marking it as DEFERRED. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 15:36:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 15:36:59 +0000 Subject: [Bugs] [Bug 1696075] Client lookup is unable to heal missing directory GFID entry In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1696075 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |NEW CC| |atumball at redhat.com --- Comment #4 from Amar Tumballi --- Marking it back to NEW, as the assignee is still bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 15:37:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 15:37:53 +0000 Subject: [Bugs] [Bug 1697971] Segfault in FUSE process, potential use after free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1697971 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |NEW --- Comment #21 from Amar Tumballi --- Marking it back to NEW, as the assignee is still bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 15:39:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 15:39:13 +0000 Subject: [Bugs] [Bug 1698861] Renaming a directory when 2 bricks of multiple disperse subvols are down leaves both old and new dirs on the bricks. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1698861 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |NEW CC| |atumball at redhat.com Assignee|bugs at gluster.org |aspandey at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 15:58:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 15:58:03 +0000 Subject: [Bugs] [Bug 1622665] clang-scan report: glusterfs issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1622665 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22775 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 15:58:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 15:58:04 +0000 Subject: [Bugs] [Bug 1622665] clang-scan report: glusterfs issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1622665 --- Comment #91 from Worker Ant --- REVIEW: https://review.gluster.org/22775 (across: clang-scan: fix NULL dereferencing warnings) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 16:06:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 16:06:30 +0000 Subject: [Bugs] [Bug 1525807] posix: brick process crashes during virtual getxattr() In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1525807 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |WORKSFORME Last Closed| |2019-05-27 16:06:30 --- Comment #2 from Amar Tumballi --- Not happening anymore. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 16:07:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 16:07:53 +0000 Subject: [Bugs] [Bug 1528983] build: changing verbosity is broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1528983 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-27 16:07:53 --- Comment #2 from Amar Tumballi --- With glusterfs-6.0 and latest master, this is working properly. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 27 16:08:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 16:08:30 +0000 Subject: [Bugs] [Bug 1529916] glusterfind doesn't terminate when it fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1529916 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium CC| |atumball at redhat.com Assignee|bugs at gluster.org |sacharya at redhat.com QA Contact|bugs at gluster.org | Severity|unspecified |high -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 16:12:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 16:12:28 +0000 Subject: [Bugs] [Bug 1598326] Setup CI for gluster-block In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1598326 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |WORKSFORME Last Closed| |2019-05-27 16:12:28 --- Comment #4 from Amar Tumballi --- We have travis now in https://github.com/gluster/gluster-block for every PR, and nightly line-coverage tests on jenkins. Should be good to close this issue. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 16:15:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 16:15:14 +0000 Subject: [Bugs] [Bug 1612615] glusterfs-client on armhf crashes writing files to disperse volumes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1612615 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-27 16:15:14 --- Comment #1 from Amar Tumballi --- glusterfs works fine on ARM from glusterfs-6.x releases. There are minor glitches, but not this basic anymore. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 16:15:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 16:15:49 +0000 Subject: [Bugs] [Bug 1612617] glustershd on armhf crashes on disperse volumes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1612617 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x Resolution|--- |WORKSFORME Last Closed| |2019-05-27 16:15:49 --- Comment #1 from Amar Tumballi --- glusterfs works fine on ARM from glusterfs-6.x releases. There are minor glitches, but not this basic anymore. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 27 16:19:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 16:19:29 +0000 Subject: [Bugs] [Bug 1653250] memory-leak in crypt xlator (glusterfs client) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1653250 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |CANTFIX Last Closed| |2019-05-27 16:19:29 --- Comment #1 from Amar Tumballi --- with latest glusterfs builds, crypt xlator is no more loaded in graph, and is present in codebase. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 16:20:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 16:20:50 +0000 Subject: [Bugs] [Bug 1668118] Failure to start geo-replication for tiered volume. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1668118 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Resolution|--- |CANTFIX Last Closed| |2019-05-27 16:20:50 --- Comment #1 from Amar Tumballi --- We have deprecated 'tier' feature of glusterfs. Hence not possible to fix it in future. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 16:23:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 16:23:15 +0000 Subject: [Bugs] [Bug 1674225] flooding of "dict is NULL" logging & crash of client process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674225 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Fixed In Version| |glusterfs-6.x, | |glusterfs-5.5 Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-27 16:23:15 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 16:23:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 16:23:16 +0000 Subject: [Bugs] [Bug 1667103] GlusterFS 5.4 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667103 Bug 1667103 depends on bug 1674225, which changed state. Bug 1674225 Summary: flooding of "dict is NULL" logging & crash of client process https://bugzilla.redhat.com/show_bug.cgi?id=1674225 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon May 27 16:24:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 16:24:48 +0000 Subject: [Bugs] [Bug 1672258] fuse takes memory and doesn't free In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672258 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #2 from Amar Tumballi --- With patch https://review.gluster.org/#/q/Ifee0737b23b12b1426c224ec5b8f591f487d83a2 merged in glusterfs-6.0 and glusterfs-5.5, this should be now fixed. Please upgrade and test it for us. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon May 27 16:29:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 27 May 2019 16:29:22 +0000 Subject: [Bugs] [Bug 1686353] flooding of "dict is NULL" logging In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686353 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.x, | |glusterfs-5.5 Resolution|--- |CURRENTRELEASE Last Closed| |2019-05-27 16:29:22 --- Comment #2 from Amar Tumballi --- This got fixed in glusterfs-5.5 (or glusterfs-6.1). Please upgrade. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 04:17:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 04:17:34 +0000 Subject: [Bugs] [Bug 1714415] New: Script to make it easier to find hung frames Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714415 Bug ID: 1714415 Summary: Script to make it easier to find hung frames Product: GlusterFS Version: mainline Status: NEW Component: scripts Assignee: bugs at gluster.org Reporter: pkarampu at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Given a directory with statedumps captured at different times if there are any stacks that appear in multiple statedumps, we need a script which prints them. It would make it easier to reduce time to debug these kinds of issues. I developed a script which does this and this is the sample output: glusterdump.25425.dump repeats=5 stack=0x7f53642cb968 pid=0 unique=0 lk-owner= glusterdump.25427.dump repeats=5 stack=0x7f85002cb968 pid=0 unique=0 lk-owner= glusterdump.25428.dump repeats=5 stack=0x7f962c2cb968 pid=0 unique=0 lk-owner= glusterdump.25428.dump repeats=2 stack=0x7f962c329f18 pid=60830 unique=0 lk-owner=88f50620967f0000 glusterdump.25429.dump repeats=5 stack=0x7f20782cb968 pid=0 unique=0 lk-owner= glusterdump.25472.dump repeats=5 stack=0x7f27ac2cb968 pid=0 unique=0 lk-owner= glusterdump.25473.dump repeats=5 stack=0x7f4fbc2cb9d8 pid=0 unique=0 lk-owner= NOTE: stacks with lk-owner=""/lk-owner=0000000000000000/unique=0 may not be hung frames and need further inspection Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
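The kind of check described in bug 1714415 above can be sketched with a small shell pipeline. This is not the script posted for review at https://review.gluster.org/22777; it is only an illustration, and it assumes the call-pool sections of a statedump contain stack=, unique= and lk-owner= lines in that order within a section, as the sample output suggests:

    # Run in a directory containing glusterdump.*.dump files captured at
    # different times for the same process.
    for f in glusterdump.*.dump; do
        awk -F'=' -v f="$f" '
            $1 == "stack"    { s = $2 }            # remember the current stack pointer
            $1 == "unique"   { u = $2 }            # and its unique id
            $1 == "lk-owner" { print s, u, "lk-owner=" $2, f }
        ' "$f"
    done |
    awk '{ key = $1 " " $2 " " $3; cnt[key]++; files[key] = files[key] " " $4 }
         END { for (k in cnt) if (cnt[k] > 1)
                   print "repeats=" cnt[k], k, "in:" files[k] }'

As the bug's note says, stacks with an empty lk-owner or unique=0 are not necessarily hung and need further inspection; the point is only to surface stacks that persist across successive statedumps.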
From bugzilla at redhat.com Tue May 28 04:20:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 04:20:02 +0000 Subject: [Bugs] [Bug 1714415] Script to make it easier to find hung frames In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714415 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22777 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 04:20:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 04:20:03 +0000 Subject: [Bugs] [Bug 1714415] Script to make it easier to find hung frames In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714415 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22777 (scripts: Find hung frames given a directory with statedumps) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 05:53:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 05:53:49 +0000 Subject: [Bugs] [Bug 1712668] Remove-brick shows warning cluster.force-migration enabled where as cluster.force-migration is disabled on the volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712668 Shwetha K Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |sacharya at redhat.com Flags|needinfo?(sacharya at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 06:25:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 06:25:21 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22778 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 06:25:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 06:25:23 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1627 from Worker Ant --- REVIEW: https://review.gluster.org/22778 (glusterd: coverity fix) posted (#1) for review on master by Sanju Rakonde -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Tue May 28 07:15:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 07:15:40 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22779 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 07:15:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 07:15:40 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #41 from Worker Ant --- REVIEW: https://review.gluster.org/22779 (lcov: improve line coverage) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 07:32:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 07:32:13 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22780 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 07:32:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 07:32:14 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #42 from Worker Ant --- REVIEW: https://review.gluster.org/22780 (code-coverage: improve it on shard, trace and posix xlators) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 10:05:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 10:05:54 +0000 Subject: [Bugs] [Bug 1686353] flooding of "dict is NULL" logging In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686353 --- Comment #3 from ryan at magenta.tv --- Hi Amar, Has this been back-ported to Gluster 4? -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 10:10:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 10:10:57 +0000 Subject: [Bugs] [Bug 1678378] Add a nightly build verification job in Jenkins for release-6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678378 --- Comment #3 from Deepshikha khandelwal --- Pushed a change to add this job too: https://review.gluster.org/#/c/build-jobs/+/22781/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue May 28 10:17:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 10:17:39 +0000 Subject: [Bugs] [Bug 1686353] flooding of "dict is NULL" logging In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686353 --- Comment #4 from Amar Tumballi --- Ryan, considering there was no release of 4.1.x branch, this was not backported. Best thing is to upgrade to 5.6 or 6.1 right now. -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 10:41:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 10:41:22 +0000 Subject: [Bugs] [Bug 1714526] New: Geo-re: Geo replication failing in "cannot allocate memory" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714526 Bug ID: 1714526 Summary: Geo-re: Geo replication failing in "cannot allocate memory" Product: Red Hat Gluster Storage Hardware: x86_64 OS: Linux Status: NEW Component: geo-replication Keywords: ZStream Severity: medium Priority: medium Assignee: sunkumar at redhat.com Reporter: rhinduja at redhat.com QA Contact: rhinduja at redhat.com CC: abhishku at redhat.com, avishwan at redhat.com, bkunal at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, skandark at redhat.com, smulay at redhat.com, storage-qa-internal at redhat.com, sunkumar at redhat.com Depends On: 1670429, 1693648, 1694002 Target Milestone: --- Classification: Red Hat Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693648 [Bug 1693648] Geo-re: Geo replication failing in "cannot allocate memory" https://bugzilla.redhat.com/show_bug.cgi?id=1694002 [Bug 1694002] Geo-re: Geo replication failing in "cannot allocate memory" -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 28 10:41:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 10:41:22 +0000 Subject: [Bugs] [Bug 1694002] Geo-re: Geo replication failing in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694002 Rahul Hinduja changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1714526 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1714526 [Bug 1714526] Geo-re: Geo replication failing in "cannot allocate memory" -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 28 10:41:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 10:41:25 +0000 Subject: [Bugs] [Bug 1714526] Geo-re: Geo replication failing in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714526 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue May 28 10:43:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 10:43:23 +0000 Subject: [Bugs] [Bug 1714526] Geo-re: Geo replication failing in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714526 Rahul Hinduja changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE Last Closed| |2019-05-28 10:43:23 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 28 11:03:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 11:03:59 -0000 Subject: [Bugs] [Bug 1707200] VM stuck in a shutdown because of a pending fuse request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1707200 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-28 11:03:55 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22670 (performance/write-behind: remove request from wip list in wb_writev_cbk) merged (#2) on release-4.1 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 11:08:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 11:08:42 +0000 Subject: [Bugs] [Bug 1714536] New: geo-rep: With heavy rename workload geo-rep log if flooded Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 Bug ID: 1714536 Summary: geo-rep: With heavy rename workload geo-rep log if flooded Product: Red Hat Gluster Storage Status: NEW Component: geo-replication Severity: medium Assignee: sunkumar at redhat.com Reporter: rhinduja at redhat.com QA Contact: rhinduja at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1709653 Target Milestone: --- Classification: Red Hat Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1709653 [Bug 1709653] geo-rep: With heavy rename workload geo-rep log if flooded -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 28 11:08:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 11:08:42 +0000 Subject: [Bugs] [Bug 1709653] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709653 Rahul Hinduja changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1714536 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue May 28 11:08:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 11:08:47 +0000 Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 28 11:50:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 11:50:43 +0000 Subject: [Bugs] [Bug 1694002] Geo-re: Geo replication failing in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694002 Bipin Kunal changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1670429 | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 28 11:53:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 11:53:12 +0000 Subject: [Bugs] [Bug 1694002] Geo-re: Geo replication failing in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694002 Bipin Kunal changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1670429 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 28 12:04:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 12:04:42 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22783 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 12:04:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 12:04:44 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1628 from Worker Ant --- REVIEW: https://review.gluster.org/22783 (io-cache: remove a unused iov_copy) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Tue May 28 12:34:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 12:34:39 +0000 Subject: [Bugs] [Bug 1714536] geo-rep: With heavy rename workload geo-rep log if flooded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714536 Bipin Kunal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |bkunal at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue May 28 17:10:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 17:10:43 +0000 Subject: [Bugs] [Bug 1713730] Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713730 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22769 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue May 28 17:10:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 28 May 2019 17:10:44 +0000 Subject: [Bugs] [Bug 1713730] Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713730 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-28 17:10:44 --- Comment #6 from Worker Ant --- REVIEW: https://review.gluster.org/22769 (If bind-address is IPv6 return it successfully) merged (#6) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 29 03:02:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 03:02:02 +0000 Subject: [Bugs] [Bug 1665361] Alerts for offline nodes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665361 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |narekuma at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 29 04:03:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 04:03:01 +0000 Subject: [Bugs] [Bug 1714851] New: issues with 'list.h' elements in clang-scan Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714851 Bug ID: 1714851 Summary: issues with 'list.h' elements in clang-scan Product: GlusterFS Version: mainline Status: NEW Component: core Severity: low Priority: low Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Visit : https://build.gluster.org/job/clang-scan (and https://build.gluster.org/job/clang-scan/lastCompletedBuild/clangScanBuildBugs/ if the link still has the bugs listed). There are quite a few of issues listed, which seems to be false-positive, and are 'list' related. Shyam's points: Posting an older context from when I was looking at certain list related clang scan issues (see below). There is a class of clang scan related issues where using certain list macros that are not empty list safe is present in the code. I am going to assume this maybe one of those cases, if not please ignore this comment. 
Clang bug type: Result of operation is garbage or undefined Where: glusterfs/api/src/glfs-fops.c glfs_recall_lease_fd 5208 Other instances (there are more but not analyzed) glusterfs/xlators/features/locks/src/clear.c clrlk_clear_inodelk 257 glusterfs/xlators/features/locks/src/clear.c clrlk_clear_entrylk 360 glusterfs/xlators/features/locks/src/entrylk.c pl_entrylk_client_cleanup 1075 glusterfs/xlators/features/locks/src/inodelk.c pl_inodelk_client_cleanup 694 Reason: list_for_each_entry_safe is not "Empty list" safe The places where this is caught is when we use uninitialized stack variables (for the most part), but in reality the pattern is incorrect and needs correction. We need to take a few actions here: - Review the list macro usage across the code - Update the list.h to a more later version from the kernel sources? - Later kernel sources call out the unsafe nature of the same - list_first_entry: https://github.com/torvalds/linux/blob/cd6c84d8f0cdc911df435bb075ba22ce3c605b07/include/linux/list.h#L473-L476 - Calls out "Note, that list is expected to be not empty" - list_first_entry Called from list_for_each_entry_safe: https://github.com/torvalds/linux/blob/cd6c84d8f0cdc911df435bb075ba22ce3c605b07/include/linux/list.h#L649-L654 - Thus making it also needing to adhere to "Note, that list is expected to be not empty" Version-Release number of selected component (if applicable): --- Need to think and handle it in a right way. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 29 06:39:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 06:39:49 +0000 Subject: [Bugs] [Bug 1714895] New: Glusterfs(fuse) client crash Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714895 Bug ID: 1714895 Summary: Glusterfs(fuse) client crash Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: libglusterfsclient Assignee: bugs at gluster.org Reporter: maybeonly at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Created attachment 1574617 --> https://bugzilla.redhat.com/attachment.cgi?id=1574617&action=edit Copied log from /var/log/glusterfs/mount-point.log of the client when crashing Description of problem: One of Glusterfs(fuse) client crashes sometimes Version-Release number of selected component (if applicable): 6.1 (from yum) How reproducible: about once a week Steps to Reproduce: I'm sorry, I don't know Actual results: It crashed. It seems a core file was generated but it failed to be written to the root dir. And I think there's something wrong with this volume, but cannot be healed. Expected results: Additional info: # gluster volume info datavolume3 Volume Name: datavolume3 Type: Replicate Volume ID: 675d3435-e60e-424d-9eb6-dfd7427defdd Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 185***:/***/bricks/datavolume3 Brick2: 237***:/***/bricks/datavolume3 Brick3: 208***:/***/bricks/datavolume3 Options Reconfigured: features.locks-revocation-max-blocked: 3 features.locks-revocation-clear-all: true cluster.entry-self-heal: on cluster.data-self-heal: on cluster.metadata-self-heal: on storage.owner-gid: **** storage.owner-uid: **** auth.allow: ********* nfs.disable: on transport.address-family: inet The attachment is copied from /var/log/glusterfs/mount-point.log of the client I've got a statedump file but I don't know which section is related. 
The volume(s) were created by gfs v3.8 @ centos6, and then I replaced the servers by new servers with gfs v6.0 @ centos7, and upgraded their gfs to v6.1, and then set cluster.op-version=60000 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Wed May 29 10:05:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 10:05:26 +0000 Subject: [Bugs] [Bug 1714973] New: upgrade after tier code removal results in peer rejection. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714973 Bug ID: 1714973 Summary: upgrade after tier code removal results in peer rejection. Product: GlusterFS Version: mainline Status: NEW Component: glusterd Severity: high Assignee: bugs at gluster.org Reporter: hgowtham at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: The tier code was removed as part of https://review.gluster.org/#/c/glusterfs/+/22664/. That change failed to handle is_tier_enabled, which is still present in the info file of older volumes. After an upgrade the new gluster node no longer carries this key while the older nodes still do, which ends in a checksum mismatch and so the peer gets rejected. Version-Release number of selected component (if applicable): mainline. 6.2 How reproducible: 100% Steps to Reproduce: 1. Create a cluster of nodes without the tier code removal patch. 2. Update one node with the tier code removal patch. 3. Once updated, the nodes get rejected. Actual results: The nodes get rejected. Expected results: The nodes should not be rejected. Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Wed May 29 10:06:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 10:06:17 +0000 Subject: [Bugs] [Bug 1714973] upgrade after tier code removal results in peer rejection. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714973 hari gowtham changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |hgowtham at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Wed May 29 10:08:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 10:08:23 +0000 Subject: [Bugs] [Bug 1714973] upgrade after tier code removal results in peer rejection. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714973 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22785 -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Wed May 29 10:08:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 10:08:24 +0000 Subject: [Bugs] [Bug 1714973] upgrade after tier code removal results in peer rejection.
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714973 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22785 (glusterd/tier: gluster upgrade broken because of tier) posted (#1) for review on master by hari gowtham -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 29 11:23:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 11:23:06 +0000 Subject: [Bugs] [Bug 1715012] New: Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715012 Bug ID: 1715012 Summary: Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually Product: GlusterFS Version: 6 Status: NEW Component: rpc Keywords: Triaged Assignee: bugs at gluster.org Reporter: hgowtham at redhat.com CC: amgad.saleh at nokia.com, bugs at gluster.org, ravishankar at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com Depends On: 1713730 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1713730 +++ Description of problem: Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually Version-Release number of selected component (if applicable): How reproducible: Configure glusterd with pure IPv6 Steps to Reproduce: 1. 2. 3. 
Actual results: log: [2019-05-21 06:07:28.121877] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/6.1/xlator/mgmt/glusterd.so [2019-05-21 06:07:28.123042] T [MSGID: 0] [xlator.c:286:xlator_dynload_apis] 0-xlator: management: method missing (reconfigure) [2019-05-21 06:07:28.123061] T [MSGID: 0] [xlator.c:290:xlator_dynload_apis] 0-xlator: management: method missing (notify) [2019-05-21 06:07:28.123069] T [MSGID: 0] [xlator.c:294:xlator_dynload_apis] 0-xlator: management: method missing (dumpops) [2019-05-21 06:07:28.123075] T [MSGID: 0] [xlator.c:305:xlator_dynload_apis] 0-xlator: management: method missing (dump_metrics) [2019-05-21 06:07:28.123081] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: management: method missing (pass_through_fops), falling back to default [2019-05-21 06:07:28.123100] T [MSGID: 0] [graph.y:218:volume_type] 0-parser: Type:management:mgmt/glusterd [2019-05-21 06:07:28.123115] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:working-directory:/var/lib/glusterd [2019-05-21 06:07:28.123124] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport-type:socket,rdma [2019-05-21 06:07:28.123134] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.socket.keepalive-time:10 [2019-05-21 06:07:28.123142] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.socket.keepalive-interval:2 [2019-05-21 06:07:28.123153] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.socket.read-fail-log:off [2019-05-21 06:07:28.123160] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.socket.listen-port:24007 [2019-05-21 06:07:28.123167] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.rdma.listen-port:24008 [2019-05-21 06:07:28.123175] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.address-family:inet6 [2019-05-21 06:07:28.123182] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:ping-timeout:0 [2019-05-21 06:07:28.123191] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:event-threads:1 [2019-05-21 06:07:28.123199] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.socket.bind-address:2001:db81234:e [2019-05-21 06:07:28.123206] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.tcp.bind-address:2001:db81234:e [2019-05-21 06:07:28.123223] T [MSGID: 0] [graph.y:253:volume_option] 0-parser: Option:management:transport.rdma.bind-address:2001:db81234:e [2019-05-21 06:07:28.123233] T [MSGID: 0] [graph.y:324:volume_end] 0-parser: end:management [2019-05-21 06:07:28.123482] I [MSGID: 106478] [glusterd.c:1422:init] 0-management: Maximum allowed open file descriptors set to 65536 [2019-05-21 06:07:28.123541] I [MSGID: 106479] [glusterd.c:1478:init] 0-management: Using /var/lib/glusterd as working directory [2019-05-21 06:07:28.123557] I [MSGID: 106479] [glusterd.c:1484:init] 0-management: Using /var/run/gluster as pid file working directory [2019-05-21 06:07:28.123678] D [MSGID: 0] [glusterd.c:458:glusterd_rpcsvc_options_build] 0-glusterd: listen-backlog value: 1024 [2019-05-21 06:07:28.123710] T [rpcsvc.c:2815:rpcsvc_init] 0-rpc-service: rx pool: 64 [2019-05-21 06:07:28.123739] T [rpcsvc-auth.c:124:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_GLUSTERFS [2019-05-21 06:07:28.123746] T 
[rpcsvc-auth.c:124:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_GLUSTERFS-v2 [2019-05-21 06:07:28.123750] T [rpcsvc-auth.c:124:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_GLUSTERFS-v3 [2019-05-21 06:07:28.123765] T [rpcsvc-auth.c:124:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_UNIX [2019-05-21 06:07:28.123772] T [rpcsvc-auth.c:124:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_NULL [2019-05-21 06:07:28.123777] D [rpcsvc.c:2835:rpcsvc_init] 0-rpc-service: RPC service inited. [2019-05-21 06:07:28.123959] D [rpcsvc.c:2337:rpcsvc_program_register] 0-rpc-service: New program registered: GF-DUMP, Num: 123451501, Ver: 1, Port: 0 [2019-05-21 06:07:28.123983] D [rpc-transport.c:293:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/6.1/rpc-transport/socket.so [2019-05-21 06:07:28.127261] T [MSGID: 0] [options.c:141:xlator_option_validate_sizet] 0-management: no range check required for 'option transport.listen-backlog 1024' [2019-05-21 06:07:28.127422] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-management: no range check required for 'option transport.socket.listen-port 24007' [2019-05-21 06:07:28.127487] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-management: no range check required for 'option transport.socket.keepalive-interval 2' [2019-05-21 06:07:28.127513] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-management: no range check required for 'option transport.socket.keepalive-time 10' [2019-05-21 06:07:28.129213] D [socket.c:4505:socket_init] 0-socket.management: Configued transport.tcp-user-timeout=42 [2019-05-21 06:07:28.129231] D [socket.c:4523:socket_init] 0-socket.management: Reconfigued transport.keepalivecnt=9 [2019-05-21 06:07:28.129239] D [socket.c:4209:ssl_setup_connection_params] 0-socket.management: SSL support on the I/O path is NOT enabled [2019-05-21 06:07:28.129244] D [socket.c:4212:ssl_setup_connection_params] 0-socket.management: SSL support for glusterd is NOT enabled [2019-05-21 06:07:28.129268] W [rpcsvc.c:1991:rpcsvc_create_listener] 0-rpc-service: listening on transport failed Expected results: Expected not to fail Additional info: The bug is in below snippet. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually. rpc/rpc-transport/socket/src/name.c /* IPV6 server can handle both ipv4 and ipv6 clients */ for (rp = res; rp != NULL; rp = rp->ai_next) { if (rp->ai_addr == NULL) continue; if (rp->ai_family == AF_INET6) { ?==============1 memcpy(addr, rp->ai_addr, rp->ai_addrlen); *addr_len = rp->ai_addrlen; } } if (!(*addr_len) && res && res->ai_addr) { memcpy(addr, res->ai_addr, res->ai_addrlen); *addr_len = res->ai_addrlen; } else { ?==================2 ret = -1; } freeaddrinfo(res); Issue#667 was opened and a fix was submitted. This is to tag the Gerrit patch with bugzilla ID --- Additional comment from RHEL Product and Program Management on 2019-05-24 16:08:03 UTC --- This bug is automatically being proposed for the next minor release of Red Hat Gluster Storage by setting the release flag 'rhgs?3.5.0' to '?'. If this bug should be proposed for a different release, please manually change the proposed release flag. --- Additional comment from Amgad on 2019-05-24 16:26:14 UTC --- Issue should be #677 and PR is #678 --- Additional comment from Ravishankar N on 2019-05-24 16:35:31 UTC --- You used the wrong Product type. Fixing it now. 
--- Additional comment from Amgad on 2019-05-24 17:13:26 UTC --- Let me know if any action on my side for code submission! --- Additional comment from Ravishankar N on 2019-05-27 05:39:51 UTC --- https://review.gluster.org/#/c/glusterfs/+/22769/ --- Additional comment from Worker Ant on 2019-05-28 17:10:44 UTC --- REVIEW: https://review.gluster.org/22769 (If bind-address is IPv6 return it successfully) merged (#6) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1713730 [Bug 1713730] Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 29 11:23:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 11:23:06 +0000 Subject: [Bugs] [Bug 1713730] Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713730 hari gowtham changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1715012 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1715012 [Bug 1715012] Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 29 11:25:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 11:25:12 +0000 Subject: [Bugs] [Bug 1713730] Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713730 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22786 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed May 29 11:25:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 11:25:13 +0000 Subject: [Bugs] [Bug 1713730] Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713730 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #7 from Worker Ant --- REVIEW: https://review.gluster.org/22786 (If bind-address is IPv6 return it successfully) posted (#1) for review on release-6 by hari gowtham -- You are receiving this mail because: You are on the CC list for the bug. 
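A note on the IPv6 bind-address fix tracked above (bug 1713730, with review 22769 merged on master and backports 22786/22787 posted for release-6): the name.c snippet quoted earlier only sets *addr_len for AF_INET6 results and then treats a non-zero *addr_len as the error branch, so an IPv6-only glusterd never gets its listener. The following is a hedged sketch of how one might exercise the scenario and confirm the fix; the option names come from the trace log quoted above, but the address 2001:db8::10 is a documentation placeholder (the reporter's address is elided), and the log/volfile paths assume a stock installation.

    # Options to place inside the "volume management" block of
    # /etc/glusterfs/glusterd.vol for an IPv6-only glusterd:
    #
    #     option transport.address-family inet6
    #     option transport.socket.bind-address 2001:db8::10
    #
    # Then restart glusterd and check whether the management port was bound.
    systemctl restart glusterd

    # An unfixed build logs "listening on transport failed" and never binds
    # port 24007; a build with the fix should show the listener on the
    # configured IPv6 address.
    ss -ltn | grep 24007
    grep -c "listening on transport failed" /var/log/glusterfs/glusterd.log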
From bugzilla at redhat.com Wed May 29 11:25:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 11:25:14 +0000 Subject: [Bugs] [Bug 1715012] Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715012 Bug 1715012 depends on bug 1713730, which changed state. Bug 1713730 Summary: Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually https://bugzilla.redhat.com/show_bug.cgi?id=1713730 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 29 11:38:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 11:38:39 +0000 Subject: [Bugs] [Bug 1715012] Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715012 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22787 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 29 11:38:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 11:38:40 +0000 Subject: [Bugs] [Bug 1715012] Failure when glusterd is configured to bind specific IPv6 address. If bind-address is IPv6, *addr_len will be non-zero and it goes to ret = -1 branch, which will cause listen failure eventually In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715012 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22787 (If bind-address is IPv6 return it successfully) posted (#2) for review on release-6 by Sunny Kumar -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 29 14:53:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 14:53:30 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #13 from Jeff Bischoff --- Thanks for the feedback, Mohit. We will certainly try out those options. So, in your opinion is this functionality working as designed? Should I close this bug? Or is there something in what I described that still bears investigation? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
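The specific options Mohit suggested are not quoted in this digest; the closure note further down records the workaround only as "disable health checks". Assuming that refers to the brick-side posix health check rather than the Kubernetes liveness probes, and on the common understanding that an interval of 0 turns the check off, a minimal sketch of the workaround would be the following ("myvol" is a placeholder volume name):

    # Inspect, then disable, the periodic posix health check that can take
    # a brick offline on transient storage errors.
    gluster volume get myvol storage.health-check-interval
    gluster volume set myvol storage.health-check-interval 0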
From bugzilla at redhat.com Wed May 29 15:26:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 15:26:51 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 --- Comment #14 from Mohit Agrawal --- Yes, please close this bug. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 29 17:47:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 29 May 2019 17:47:38 +0000 Subject: [Bugs] [Bug 1709959] Gluster causing Kubernetes containers to enter crash loop with 'mkdir ... file exists' error message In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1709959 Jeff Bischoff changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-05-29 17:47:38 --- Comment #15 from Jeff Bischoff --- Closed with workaround provided (disable health checks) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 30 07:33:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 07:33:23 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #43 from Worker Ant --- REVIEW: https://review.gluster.org/22551 (tests: add tests for different signal handling) merged (#12) on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 30 07:36:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 07:36:03 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #44 from Worker Ant --- REVIEW: https://review.gluster.org/22779 (marker: remove some unused functions) merged (#5) on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 30 09:40:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 09:40:17 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1629 from Worker Ant --- REVIEW: https://review.gluster.org/22778 (glusterd: coverity fix) merged (#2) on master by Xavi Hernandez -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Thu May 30 10:16:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 10:16:35 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22789 -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Thu May 30 10:16:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 10:16:35 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #45 from Worker Ant --- REVIEW: https://review.gluster.org/22789 (lcov: improve line coverage) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 30 10:17:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 10:17:18 +0000 Subject: [Bugs] [Bug 1715422] New: ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715422 Bug ID: 1715422 Summary: ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time Product: Red Hat Gluster Storage Version: rhgs-3.5 Status: NEW Component: core Assignee: atumball at redhat.com Reporter: khiremat at redhat.com QA Contact: rhinduja at redhat.com CC: bugs at gluster.org, pasik at iki.fi, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1593542 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1593542 +++ Description of problem: Upgrade scenario: Currently for older files, the ctime gets updated during {a|m|c}time modification fop and eventually becomes consistent. With any {a|m|c}time modification, the ctime is initialized with latest time which is incorrect. So how do we handle this upgrade scenario. Version-Release number of selected component (if applicable): mainline How reproducible: Always Steps to Reproduce: 1. Create EC/replica volume, mount it, create a file. 2. Enable ctime feature 3. touch the created file {m|a|c}time will be latest. Only access time should have been updated. Actual results: {a|m|c}time gets updated. Expected results: Only access time should have been updated. Additional info: Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1593542 [Bug 1593542] ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 30 10:17:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 10:17:18 +0000 Subject: [Bugs] [Bug 1593542] ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593542 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1715422 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1715422 [Bug 1715422] ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
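A hedged reproduction sketch for the ctime upgrade scenario described in bug 1715422 above: the volume name "patchy", the mount point, and the use of "touch -a" to model an access-time-only update are illustrative assumptions, not details taken from the report.

    # File created while the ctime feature is still off.
    mount -t glusterfs server1:/patchy /mnt/patchy
    touch /mnt/patchy/oldfile
    stat /mnt/patchy/oldfile              # note the original a/m/c times

    gluster volume set patchy features.ctime on   # enable the ctime feature

    touch -a /mnt/patchy/oldfile          # update only the access time
    stat /mnt/patchy/oldfile              # reported bug: atime, mtime and ctime
                                          # all jump to the current time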
From bugzilla at redhat.com Thu May 30 10:17:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 10:17:20 +0000 Subject: [Bugs] [Bug 1715422] ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715422 RHEL Product and Program Management changed: What |Removed |Added ---------------------------------------------------------------------------- Rule Engine Rule| |Gluster: set proposed | |release flag for new BZs at | |RHGS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 30 10:17:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 10:17:46 +0000 Subject: [Bugs] [Bug 1715422] ctime: Upgrade/Enabling ctime feature wrongly updates older files with latest {a|m|c}time In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715422 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|atumball at redhat.com |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu May 30 15:55:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 15:55:06 +0000 Subject: [Bugs] [Bug 1714098] Make debugging hung frames easier In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714098 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-30 15:55:06 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22773 (stack: Make sure to have unique call-stacks in all cases) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 30 15:56:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 15:56:11 +0000 Subject: [Bugs] [Bug 1714415] Script to make it easier to find hung frames In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1714415 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-30 15:56:11 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22777 (scripts: Find hung frames given a directory with statedumps) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu May 30 18:26:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 18:26:06 +0000 Subject: [Bugs] [Bug 1710054] Optimize the glustershd manager to send reconfigure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710054 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22791 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu May 30 18:26:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 30 May 2019 18:26:07 +0000 Subject: [Bugs] [Bug 1710054] Optimize the glustershd manager to send reconfigure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710054 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22791 (glusterd/svc: Stop stale process using the glusterd_proc_stop) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 31 03:07:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 03:07:39 +0000 Subject: [Bugs] [Bug 1710159] glusterd: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710159 --- Comment #3 from Atin Mukherjee --- One correction to the above root cause is process never hangs here. It's just that the response back to the cli is not sent due to the incorrect state machine movement depending on the event trigger. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 31 03:09:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 03:09:26 +0000 Subject: [Bugs] [Bug 1710159] glusterd: While upgrading (3-node cluster) 'gluster v status' times out on node to be upgraded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710159 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-31 03:09:26 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22730 (glusterd: add an op-version check) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed May 22 08:12:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 22 May 2019 08:12:25 +0000 Subject: [Bugs] [Bug 1712741] glusterd_svcs_stop should call individual wrapper function to stop rather than calling the glusterd_svc_stop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1712741 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-31 03:15:31 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22761 (glusterd/svc: glusterd_svcs_stop should call individual wrapper function) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri May 31 03:22:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 03:22:28 +0000 Subject: [Bugs] [Bug 1710054] Optimize the glustershd manager to send reconfigure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710054 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22791 (glusterd/svc: Stop stale process using the glusterd_proc_stop) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 31 03:27:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 03:27:38 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22792 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 31 03:27:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 03:27:39 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #46 from Worker Ant --- REVIEW: https://review.gluster.org/22792 (lcov: more coverage to shard, old-protocol, sdfs) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 31 03:31:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 03:31:53 +0000 Subject: [Bugs] [Bug 1713429] My personal blog contenting is not feeding to https://planet.gluster.org/ In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713429 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(dkhandel at redhat.c | |om) --- Comment #2 from Atin Mukherjee --- Any update on this? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 31 07:09:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 07:09:04 +0000 Subject: [Bugs] [Bug 1689173] slow 'ls' (crawl/readdir) performance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22793 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 31 07:09:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 07:09:05 +0000 Subject: [Bugs] [Bug 1689173] slow 'ls' (crawl/readdir) performance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22793 (slow ls issue) posted (#1) for review on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri May 31 07:33:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 07:33:21 +0000 Subject: [Bugs] [Bug 1713429] My personal blog contenting is not feeding to https://planet.gluster.org/ In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1713429 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(dkhandel at redhat.c | |om) | --- Comment #3 from Deepshikha khandelwal --- It's failing because the title field is empty on Rafi's blog feed. @rafi Can you please check. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 31 09:30:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 09:30:30 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #47 from Worker Ant --- REVIEW: https://review.gluster.org/22792 (lcov: more coverage to shard, old-protocol, sdfs) merged (#1) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 31 11:17:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 11:17:24 +0000 Subject: [Bugs] [Bug 1650095] Regression tests for geo-replication on EC volume is not available. It should be added. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1650095 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-31 11:17:24 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/21650 (tests/geo-rep: Add EC volume test case) merged (#12) on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri May 31 11:28:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 11:28:15 +0000 Subject: [Bugs] [Bug 1708926] Invalid memory access while executing cleanup_and_exit In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1708926 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-05-21 11:37:12 |2019-05-31 11:28:15 --- Comment #7 from Worker Ant --- REVIEW: https://review.gluster.org/22709 (glusterfsd/cleanup: Protect graph object under a lock) merged (#10) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 31 11:32:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 11:32:50 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22794 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 31 11:32:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 11:32:51 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #48 from Worker Ant --- REVIEW: https://review.gluster.org/22794 (tests/geo-rep: Add tests to cover glusterd geo-rep) posted (#1) for review on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 31 12:05:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 12:05:55 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22795 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri May 31 12:05:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 12:05:56 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1630 from Worker Ant --- REVIEW: https://review.gluster.org/22795 (geo-rep/gsyncd: name is not freed in one of the cases) posted (#1) for review on master by Sheetal Pamecha -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Fri May 31 12:46:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 12:46:03 +0000 Subject: [Bugs] [Bug 1710054] Optimize the glustershd manager to send reconfigure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1710054 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-31 12:46:03 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22729 (glusterd/shd: Optimize the glustershd manager to send reconfigure) merged (#8) on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 31 13:25:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 13:25:57 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #49 from Worker Ant --- REVIEW: https://review.gluster.org/22794 (tests/geo-rep: Add tests to cover glusterd geo-rep) merged (#2) on master by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri May 31 14:21:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 14:21:03 +0000 Subject: [Bugs] [Bug 1711297] Optimize glusterd code to copy dictionary in handshake code path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711297 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-05-31 14:21:03 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22742 (glusterd: Optimize code to copy dictionary in handshake code path) merged (#7) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Fri May 31 16:15:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 16:15:31 +0000 Subject: [Bugs] [Bug 1715921] New: uss.t tests times out with brick-mux regression Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715921 Bug ID: 1715921 Summary: uss.t tests times out with brick-mux regression Product: GlusterFS Version: mainline Status: NEW Component: snapshot Assignee: bugs at gluster.org Reporter: rabhat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: The uss.t testcase sometimes fails due to a timeout when run with brick-mux enabled. One instance of this failure in a regression run can be found at [1]. The reason is that the test which leads to the timeout does the following:
1. Create a file (aaa in the testcase)
2. Take a snapshot (snap6)
3. Access the file via uss
4. Delete the file (aaa)
5. Delete the snapshot (snap6)
6. Again create a snapshot with the same name (snap6)
7. Again access the file 'aaa', expecting it to fail (as the file was not present at the time of the new snapshot)
But we access snap6 in the last step too soon: the previous instance of snap6 might still be going through its cleanup phase, causing hangs. So, after deleting the snapshot (snap6), wait until glusterd no longer reports it before creating a snapshot with the same name (see the sketch below). [1] https://build.gluster.org/job/regression-on-demand-multiplex/613/ Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri May 31 16:17:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 16:17:18 +0000 Subject: [Bugs] [Bug 1715921] uss.t tests times out with brick-mux regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715921 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22728 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
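For the uss.t change described in bug 1715921 above, a rough sketch of the wait between deleting snap6 and recreating it, written in plain shell rather than the test framework's EXPECT_WITHIN-style helpers; the helper name, the 30-second bound, and the "patchy"/no-timestamp arguments are illustrative, not taken from uss.t itself.

    # Poll until a deleted snapshot disappears from 'gluster snapshot list'
    # before a snapshot with the same name is created again.
    wait_for_snap_cleanup () {
        snap=$1
        for i in $(seq 1 30); do
            gluster snapshot list | grep -qw "$snap" || return 0
            sleep 1
        done
        return 1
    }

    gluster --mode=script snapshot delete snap6    # non-interactive delete
    wait_for_snap_cleanup snap6                    # wait till glusterd forgets snap6
    gluster snapshot create snap6 patchy no-timestamp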
From bugzilla at redhat.com Fri May 31 16:17:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 16:17:19 +0000 Subject: [Bugs] [Bug 1715921] uss.t tests times out with brick-mux regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1715921 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22728 (uss: Ensure that snapshot is deleted before creating a new snapshot) posted (#8) for review on master by Raghavendra Bhat -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri May 31 16:49:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 31 May 2019 16:49:17 +0000 Subject: [Bugs] [Bug 1711297] Optimize glusterd code to copy dictionary in handshake code path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1711297 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Improvement, Performance Priority|unspecified |urgent Severity|unspecified |urgent -- You are receiving this mail because: You are on the CC list for the bug.