From bugzilla at redhat.com Thu Aug 1 01:04:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 01:04:52 +0000
Subject: [Bugs] [Bug 1717824] Fencing: Added the tcmu-runner ALUA feature
support, but after one of the nodes is rebooted glfs_file_lock() gets stuck
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1717824
Xiubo Li changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |needinfo?(spalai at redhat.com)
--- Comment #20 from Xiubo Li ---
Checking errno directly when ret == -1 works for me now.
But I can get both -EAGAIN and -EBUSY, while only -EBUSY is expected.
So the question is: why is there always an -EAGAIN every time before the lock
is acquired?
Thanks
BRs
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 01:15:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 01:15:46 +0000
Subject: [Bugs] [Bug 1730948] [Glusterfs4.1.9] memory leak in fuse mount
process.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1730948
guolei changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(guol-fnst at cn.fujitsu.com) |
--- Comment #5 from guolei ---
The other bug is seen with creation/renaming of files/directories at root of
the share. Just for the sake of verifying this bug you may try using
vfs_glusterfs module avoiding operations at root if you can't get hold of
required Samba version.
-> I tried the vfs_glusterfs module (smb4.8.3) and found that the fuse mount
process consumes little memory,
but the smbd process consumes much more memory than usual.
If you need more info, let me know.
Thanks very much.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 01:24:36 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 01:24:36 +0000
Subject: [Bugs] [Bug 1730948] [Glusterfs4.1.9] memory leak in fuse mount
process.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1730948
--- Comment #6 from guolei ---
Here is the output of the "top" command when I accessed the volume via smb
using the vfs_glusterfs module.
Tasks: 721 total, 2 running, 352 sleeping, 0 stopped, 0 zombie
%Cpu(s): 9.8 us, 9.2 sy, 0.0 ni, 80.4 id, 0.1 wa, 0.0 hi, 0.5 si, 0.0 st
KiB Mem : 98674000 total, 558548 free, 33681528 used, 64433928 buff/cache
KiB Swap: 4194300 total, 4194300 free, 0 used. 62794580 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8843 tom 20 0 16.802g 0.014t 265376 R 91.8 15.0 2380:11 smbd
1721 root 20 0 31.983g 3.419g 16616 S 0.0 3.6 12:45.74 java
4176 root 20 0 4554944 1.149g 7972 S 78.6 1.2 1898:52 glusterfsd
4156 root 20 0 4357068 1.077g 8120 S 61.8 1.1 1858:34 glusterfsd
4182 root 20 0 4223136 1.065g 8104 S 76.6 1.1 1881:06 glusterfsd
4115 root 20 0 4231652 1.032g 8148 S 51.6 1.1 1807:30 glusterfsd
4188 root 20 0 4281684 1.030g 8032 S 61.8 1.1 1802:19 glusterfsd
4122 root 20 0 4223168 1.025g 8268 S 56.6 1.1 1872:42 glusterfsd
4155 root 20 0 4224728 1.017g 8400 S 78.6 1.1 1789:53 glusterfsd
4146 root 20 0 4223168 1.017g 8140 S 56.6 1.1 1868:54 glusterfsd
4131 root 20 0 4228336 1.015g 8196 S 54.3 1.1 1884:54 glusterfsd
4132 root 20 0 4158672 1.015g 8216 S 73.4 1.1 1795:58 glusterfsd
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 02:59:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 02:59:49 +0000
Subject: [Bugs] [Bug 1734299] ctime: When healing ctime xattr for legacy
files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734299
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-01 02:59:49
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23131 (posix/ctime: Fix race during lookup
ctime xattr heal) merged (#2) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 02:59:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 02:59:49 +0000
Subject: [Bugs] [Bug 1734305] ctime: When healing ctime xattr for legacy
files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734305
Bug 1734305 depends on bug 1734299, which changed state.
Bug 1734299 Summary: ctime: When healing ctime xattr for legacy files, if multiple clients access and modify the same file, the ctime might be updated incorrectly.
https://bugzilla.redhat.com/show_bug.cgi?id=1734299
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 03:15:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 03:15:52 +0000
Subject: [Bugs] [Bug 1735514] New: Open fd heal should filter O_APPEND/O_EXCL
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1735514
Bug ID: 1735514
Summary: Open fd heal should filter O_APPEND/O_EXCL
Product: Red Hat Gluster Storage
Version: rhgs-3.5
Status: ASSIGNED
Component: disperse
Keywords: ZStream
Severity: medium
Priority: medium
Assignee: aspandey at redhat.com
Reporter: sheggodu at redhat.com
QA Contact: nchilaka at redhat.com
CC: aspandey at redhat.com, atumball at redhat.com,
bugs at gluster.org, nchilaka at redhat.com,
pkarampu at redhat.com, rcyriac at redhat.com,
rhs-bugs at redhat.com, sankarshan at redhat.com,
sheggodu at redhat.com, storage-qa-internal at redhat.com,
vdas at redhat.com
Depends On: 1734303, 1733935
Target Milestone: ---
Classification: Red Hat
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1733935
[Bug 1733935] Open fd heal should filter O_APPEND/O_EXCL
https://bugzilla.redhat.com/show_bug.cgi?id=1734303
[Bug 1734303] Open fd heal should filter O_APPEND/O_EXCL
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 03:15:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 03:15:52 +0000
Subject: [Bugs] [Bug 1734303] Open fd heal should filter O_APPEND/O_EXCL
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734303
Sunil Kumar Acharya changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1735514
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1735514
[Bug 1735514] Open fd heal should filter O_APPEND/O_EXCL
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 03:15:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 03:15:52 +0000
Subject: [Bugs] [Bug 1733935] Open fd heal should filter O_APPEND/O_EXCL
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733935
Sunil Kumar Acharya changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1735514
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1735514
[Bug 1735514] Open fd heal should filter O_APPEND/O_EXCL
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 03:15:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 03:15:57 +0000
Subject: [Bugs] [Bug 1735514] Open fd heal should filter O_APPEND/O_EXCL
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1735514
RHEL Product and Program Management changed:
What |Removed |Added
----------------------------------------------------------------------------
Rule Engine Rule| |Gluster: set proposed release flag for new BZs at RHGS
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 03:18:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 03:18:46 +0000
Subject: [Bugs] [Bug 1735514] Open fd heal should filter O_APPEND/O_EXCL
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1735514
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
CC| |amukherj at redhat.com
--- Comment #2 from Atin Mukherjee ---
Upstream patch : https://review.gluster.org/#/c/glusterfs/+/23121/
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 03:20:48 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 03:20:48 +0000
Subject: [Bugs] [Bug 1735514] Open fd heal should filter O_APPEND/O_EXCL
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1735514
Sunil Kumar Acharya changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|medium |high
Severity|medium |high
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 03:27:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 03:27:46 +0000
Subject: [Bugs] [Bug 1735514] Open fd heal should filter O_APPEND/O_EXCL
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1735514
Rejy M Cyriac changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1696809
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 03:27:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 03:27:50 +0000
Subject: [Bugs] [Bug 1735514] Open fd heal should filter O_APPEND/O_EXCL
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1735514
RHEL Product and Program Management changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|rhgs-3.5.0? blocker? |rhgs-3.5.0+ blocker+
Rule Engine Rule| |Gluster: Approve release flag for RHGS 3.5.0
Target Release|--- |RHGS 3.5.0
Rule Engine Rule| |666
Rule Engine Rule| |327
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 03:36:21 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 03:36:21 +0000
Subject: [Bugs] [Bug 1734027] glusterd 6.4 memory leaks 2-3 GB per 24h (OOM)
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734027
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Version|unspecified |mainline
CC| |bugs at gluster.org
Component|glusterd |glusterd
Assignee|amukherj at redhat.com |bugs at gluster.org
Resolution|--- |WONTFIX
Product|Red Hat Gluster Storage |GlusterFS
QA Contact|bmekala at redhat.com |
Last Closed| |2019-08-01 03:36:21
--- Comment #3 from Atin Mukherjee ---
The 3.12 version is EOL, and several memory-leak fixes have been made since. If
this issue persists in the latest releases (release-5 or release-6), kindly
reopen. Since we don't have an active 3.12 version, moving the bug from RHGS to
GlusterFS forced me to choose the mainline version, although the issue isn't
actually applicable to mainline.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 04:12:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 04:12:54 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23083
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 04:12:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 04:12:56 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #737 from Worker Ant ---
REVIEW: https://review.gluster.org/23083 (Multiple files: get trivial stuff
done before lock) merged (#12) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 04:39:02 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 04:39:02 +0000
Subject: [Bugs] [Bug 1732772] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732772
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |ON_QA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 04:39:03 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 04:39:03 +0000
Subject: [Bugs] [Bug 1732776] I/O error on writes to a disperse volume when
replace-brick is executed
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732776
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |ON_QA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 04:39:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 04:39:07 +0000
Subject: [Bugs] [Bug 1732779] [GSS] An Input/Output error happens on a
disperse volume when doing unaligned writes to a sparse file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732779
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |ON_QA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 04:39:08 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 04:39:08 +0000
Subject: [Bugs] [Bug 1734303] Open fd heal should filter O_APPEND/O_EXCL
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734303
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |ON_QA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 04:43:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 04:43:10 +0000
Subject: [Bugs] [Bug 1732790] fix truncate lock to cover the write in
truncate clean
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732790
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |ON_QA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 05:32:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 05:32:24 +0000
Subject: [Bugs] [Bug 1730948] [Glusterfs4.1.9] memory leak in fuse mount
process.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1730948
Anoop C S changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |needinfo?(guol-fnst at cn.fujitsu.com)
--- Comment #7 from Anoop C S ---
(In reply to guolei from comment #5)
> I tried the vfs_glusterfs module (smb4.8.3) and found that the fuse mount
> process consumes little memory.
Mostly because you don't have an active connection to the share via FUSE mount
since you switched to using vfs_glusterfs.
> But the smbd process consumes much more memory than usual.
It will consume more than in the previous situation where the FUSE mount was
used, because the entire glusterfs client stack gets loaded into smbd, which
then acts as a client to glusterfs. You will have to figure out by how much the
memory footprint of smbd increased (when vfs_glusterfs is used) and compare it
with the memory of the "glusterfs" process recorded when the FUSE mount was
shared.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 07:42:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 07:42:06 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23139
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 07:42:07 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 07:42:07 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
--- Comment #70 from Worker Ant ---
REVIEW: https://review.gluster.org/23139 (lcov: check for zerofill/discard fops
on arbiter) posted (#1) for review on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 10:40:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 10:40:57 +0000
Subject: [Bugs] [Bug 1716848] DHT: directory permissions are wiped out
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1716848
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-01 10:40:57
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/22814 (cluster/dht: Fix directory perms
during selfheal) merged (#4) on release-6 by N Balachandran
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 10:42:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 10:42:57 +0000
Subject: [Bugs] [Bug 1733881] [geo-rep]: gluster command not found while
setting up a non-root session
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733881
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-01 10:42:57
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23117 (geo-rep: Fix mount broker setup
issue) merged (#2) on release-5 by Kotresh HR
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 10:42:58 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 10:42:58 +0000
Subject: [Bugs] [Bug 1733880] [geo-rep]: gluster command not found while
setting up a non-root session
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733880
Bug 1733880 depends on bug 1733881, which changed state.
Bug 1733881 Summary: [geo-rep]: gluster command not found while setting up a non-root session
https://bugzilla.redhat.com/show_bug.cgi?id=1733881
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 10:45:47 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 10:45:47 +0000
Subject: [Bugs] [Bug 1731509] snapd crashes sometimes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1731509
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-01 10:45:47
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23081 (features/snapview-server: obtain the
list of snapshots inside the lock) merged (#2) on release-6 by hari gowtham
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 11:39:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 11:39:45 +0000
Subject: [Bugs] [Bug 1734027] glusterd 6.4 memory leaks 2-3 GB per 24h (OOM)
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734027
Alex changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |NEW
Resolution|WONTFIX |---
Keywords| |Reopened
--- Comment #4 from Alex ---
GLUSTERD version affected: 6.4
Hi,
I've only mentioned 3.12 for the background, but if you read further you'll see
this is a bug on 6.4.
Thanks for reopening this.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 11:55:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 11:55:11 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
--- Comment #71 from Worker Ant ---
REVIEW: https://review.gluster.org/23139 (lcov: check for zerofill/discard fops
on arbiter) merged (#1) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 13:11:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 13:11:31 +0000
Subject: [Bugs] [Bug 1554286] Xattr not updated if increasing the retention
of a WORM/Retained file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1554286
Vishal Pandey changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 13:35:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 13:35:31 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23141
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 13:35:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 13:35:32 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
--- Comment #72 from Worker Ant ---
REVIEW: https://review.gluster.org/23141 (xdr: add code so we have more xdr
functions covered) posted (#1) for review on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 13:53:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 13:53:43 +0000
Subject: [Bugs] [Bug 1708603] [geo-rep]: Note section in document is
required for ignore_deletes true config option where it might delete a file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1708603
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23142
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 13:53:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 13:53:44 +0000
Subject: [Bugs] [Bug 1708603] [geo-rep]: Note section in document is
required for ignore_deletes true config option where it might delete a file
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1708603
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23142 (geo-rep: Note section is required for
ignore_deletes) posted (#1) for review on master by Shwetha K Acharya
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 17:00:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 17:00:51 +0000
Subject: [Bugs] [Bug 1736341] New: potential deadlock while processing
callbacks in gfapi
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
Bug ID: 1736341
Summary: potential deadlock while processing callbacks in gfapi
Product: GlusterFS
Version: 6
Hardware: All
OS: All
Status: NEW
Component: libgfapi
Severity: high
Assignee: bugs at gluster.org
Reporter: skoduri at redhat.com
QA Contact: bugs at gluster.org
CC: atumball at redhat.com, bugs at gluster.org, pasik at iki.fi
Depends On: 1733166
Blocks: 1733520
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1733166 +++
Description of problem:
While running parallel I/Os involving many files on an nfs-ganesha mount, we
hit the below deadlock in the nfs-ganesha process.
epoll thread:
....glfs_cbk_upcall_data->upcall_syncop_args_init->glfs_h_poll_cache_invalidation->glfs_h_find_handle->priv_glfs_active_subvol->glfs_lock
(waiting on lock)
I/O thread:
...glfs_h_stat->glfs_resolve_inode->__glfs_resolve_inode (at this point we
acquired glfs_lock) -> ...->glfs_refresh_inode_safe->syncop_lookup
To summarize: the I/O thread, which acquired glfs_lock, is waiting for the
epoll thread to receive a response, whereas the epoll thread is waiting for the
I/O thread to release the lock.
A similar issue was identified earlier (bug 1693575).
There could be other issues at different layers, depending on how client
xlators choose to process these callbacks.
The correct way of avoiding or fixing these issues is to re-design the upcall
model, i.e., to use separate sockets for callback communication instead of
reusing the same epoll threads. A GitHub issue has been raised for that:
https://github.com/gluster/glusterfs/issues/697
Since that may take a while, this BZ is raised to provide a workaround fix in
the gfapi layer for now.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--- Additional comment from Worker Ant on 2019-07-25 10:09:58 UTC ---
REVIEW: https://review.gluster.org/23107 (gfapi: Fix deadlock while processing
upcall) posted (#1) for review on release-6 by soumya k
--- Additional comment from Worker Ant on 2019-07-25 10:16:57 UTC ---
REVIEW: https://review.gluster.org/23108 (gfapi: Fix deadlock while processing
upcall) posted (#1) for review on master by soumya k
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1733166
[Bug 1733166] potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
[Bug 1733520] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 17:00:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 17:00:51 +0000
Subject: [Bugs] [Bug 1733166] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733166
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1736341
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
[Bug 1736341] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 17:00:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 17:00:51 +0000
Subject: [Bugs] [Bug 1733520] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1736341
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
[Bug 1736341] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 17:01:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 17:01:13 +0000
Subject: [Bugs] [Bug 1736342] New: potential deadlock while processing
callbacks in gfapi
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
Bug ID: 1736342
Summary: potential deadlock while processing callbacks in gfapi
Product: GlusterFS
Version: 5
Hardware: All
OS: All
Status: NEW
Component: libgfapi
Severity: high
Assignee: bugs at gluster.org
Reporter: skoduri at redhat.com
QA Contact: bugs at gluster.org
CC: atumball at redhat.com, bugs at gluster.org, pasik at iki.fi
Depends On: 1733166
Blocks: 1733520, 1736341
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1733166 +++
Description of problem:
While running parallel I/Os involving many files on an nfs-ganesha mount, we
hit the below deadlock in the nfs-ganesha process.
epoll thread:
....glfs_cbk_upcall_data->upcall_syncop_args_init->glfs_h_poll_cache_invalidation->glfs_h_find_handle->priv_glfs_active_subvol->glfs_lock
(waiting on lock)
I/O thread:
...glfs_h_stat->glfs_resolve_inode->__glfs_resolve_inode (at this point we
acquired glfs_lock) -> ...->glfs_refresh_inode_safe->syncop_lookup
To summarize-
I/O thread which acquired glfs_lock are waiting for epoll threads to receive
response where as epoll threads are waiting for I/O threads to release lock.
Similar issue was identified earlier (bug1693575).
There could be other issues at different layers depending on how client xlators
choose to process these callbacks.
The correct way to avoid or fix these issues is to redesign the upcall model to
use separate sockets for callback communication instead of reusing the same
epoll threads. A GitHub issue has been raised for that:
https://github.com/gluster/glusterfs/issues/697
Since that may take a while, this BZ is being raised to provide a workaround fix
in the gfapi layer for now.
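The kind of hand-off such a workaround relies on can be sketched as follows.
This is an illustrative pthread sketch only, not the actual gfapi patch; the
names upcall_t, upcall_enqueue, upcall_worker, and run_demo are invented for the
example. The notification (epoll) thread only queues the callback and returns,
and a dedicated worker drains the queue, so the epoll thread never blocks on a
lock an I/O thread may be holding:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct upcall {
    int data;                 /* stands in for the upcall payload */
    struct upcall *next;
} upcall_t;

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_cond = PTHREAD_COND_INITIALIZER;
static upcall_t *q_head;
static int processed;
static int done;

/* Called from the epoll thread: never takes any heavyweight lock,
 * just queues the work and returns immediately. */
void upcall_enqueue(int data)
{
    upcall_t *u = malloc(sizeof(*u));
    u->data = data;
    pthread_mutex_lock(&q_lock);
    u->next = q_head;
    q_head = u;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

/* Dedicated worker: the only place where the potentially blocking
 * handle resolution (glfs_h_find_handle()-style work) would run, so
 * the epoll thread stays free to deliver the responses the I/O
 * threads are waiting on. */
static void *upcall_worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&q_lock);
        while (!q_head && !done)
            pthread_cond_wait(&q_cond, &q_lock);
        if (!q_head && done) {
            pthread_mutex_unlock(&q_lock);
            return NULL;
        }
        upcall_t *u = q_head;
        q_head = u->next;
        pthread_mutex_unlock(&q_lock);
        /* ... blocking per-upcall processing would happen here ... */
        processed++;
        free(u);
    }
}

/* Drive the demo: enqueue n upcalls, then shut the worker down. */
int run_demo(int n)
{
    pthread_t t;
    pthread_create(&t, NULL, upcall_worker, NULL);
    for (int i = 0; i < n; i++)
        upcall_enqueue(i);
    pthread_mutex_lock(&q_lock);
    done = 1;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
    pthread_join(t, NULL);
    return processed;
}
```

The design choice being illustrated: the thread that receives network events
never performs work that can block on application locks; it only transfers
ownership of the event to another thread.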
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--- Additional comment from Worker Ant on 2019-07-25 10:09:58 UTC ---
REVIEW: https://review.gluster.org/23107 (gfapi: Fix deadlock while processing
upcall) posted (#1) for review on release-6 by soumya k
--- Additional comment from Worker Ant on 2019-07-25 10:16:57 UTC ---
REVIEW: https://review.gluster.org/23108 (gfapi: Fix deadlock while processing
upcall) posted (#1) for review on master by soumya k
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1733166
[Bug 1733166] potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
[Bug 1733520] potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
[Bug 1736341] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 17:01:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 17:01:13 +0000
Subject: [Bugs] [Bug 1733166] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733166
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1736342
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
[Bug 1736342] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 17:01:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 17:01:13 +0000
Subject: [Bugs] [Bug 1733520] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1736342
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
[Bug 1736342] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 17:01:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 17:01:13 +0000
Subject: [Bugs] [Bug 1736341] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1736342
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
[Bug 1736342] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 17:01:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 17:01:34 +0000
Subject: [Bugs] [Bug 1736345] New: potential deadlock while processing
callbacks in gfapi
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736345
Bug ID: 1736345
Summary: potential deadlock while processing callbacks in gfapi
Product: GlusterFS
Version: 7
Hardware: All
OS: All
Status: NEW
Component: libgfapi
Severity: high
Assignee: bugs at gluster.org
Reporter: skoduri at redhat.com
QA Contact: bugs at gluster.org
CC: atumball at redhat.com, bugs at gluster.org, pasik at iki.fi
Depends On: 1733166
Blocks: 1733520, 1736341, 1736342
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1733166 +++
Description of problem:
While running parallel I/O involving many files on an nfs-ganesha mount, we have
hit the deadlock below in the nfs-ganesha process.
epoll thread:
....glfs_cbk_upcall_data->upcall_syncop_args_init->glfs_h_poll_cache_invalidation->glfs_h_find_handle->priv_glfs_active_subvol->glfs_lock
(waiting on lock)
I/O thread:
...glfs_h_stat->glfs_resolve_inode->__glfs_resolve_inode (at this point we
acquired glfs_lock) -> ...->glfs_refresh_inode_safe->syncop_lookup
To summarize: the I/O threads that acquired glfs_lock are waiting for the epoll
threads to receive a response, whereas the epoll threads are waiting for the I/O
threads to release the lock.
A similar issue was identified earlier (bug 1693575).
There could be other issues at different layers depending on how client xlators
choose to process these callbacks.
The correct way to avoid or fix these issues is to redesign the upcall model to
use separate sockets for callback communication instead of reusing the same
epoll threads. A GitHub issue has been raised for that:
https://github.com/gluster/glusterfs/issues/697
Since that may take a while, this BZ is being raised to provide a workaround fix
in the gfapi layer for now.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--- Additional comment from Worker Ant on 2019-07-25 10:09:58 UTC ---
REVIEW: https://review.gluster.org/23107 (gfapi: Fix deadlock while processing
upcall) posted (#1) for review on release-6 by soumya k
--- Additional comment from Worker Ant on 2019-07-25 10:16:57 UTC ---
REVIEW: https://review.gluster.org/23108 (gfapi: Fix deadlock while processing
upcall) posted (#1) for review on master by soumya k
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1733166
[Bug 1733166] potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
[Bug 1733520] potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
[Bug 1736341] potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
[Bug 1736342] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 17:01:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 17:01:34 +0000
Subject: [Bugs] [Bug 1733166] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733166
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1736345
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1736345
[Bug 1736345] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 17:01:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 17:01:34 +0000
Subject: [Bugs] [Bug 1733520] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1736345
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1736345
[Bug 1736345] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Thu Aug 1 17:01:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 17:01:34 +0000
Subject: [Bugs] [Bug 1736341] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1736345
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1736345
[Bug 1736345] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 17:01:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 17:01:34 +0000
Subject: [Bugs] [Bug 1736342] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
Soumya Koduri changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1736345
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1736345
[Bug 1736345] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 18:06:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 18:06:30 +0000
Subject: [Bugs] [Bug 1736481] New: capture stat failure error while setting
the gfid
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736481
Bug ID: 1736481
Summary: capture stat failure error while setting the gfid
Product: GlusterFS
Version: 7
Status: NEW
Component: posix
Assignee: bugs at gluster.org
Reporter: rabhat at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
For a create operation, after the entry is created, the posix xlator tries to
set the gfid for that entry. There are several places where setting the gfid can
fail. While the failure is handled in all cases, for one of them the errno is
not captured. Capturing it would help in debugging.
int
posix_gfid_set(xlator_t *this, const char *path, loc_t *loc, dict_t *xattr_req,
               pid_t pid, int *op_errno)
{
    uuid_t uuid_req;
    uuid_t uuid_curr;
    int ret = 0;
    ssize_t size = 0;
    struct stat stat = {
        0,
    };

    *op_errno = 0;

    if (!xattr_req) {
        if (pid != GF_SERVER_PID_TRASH) {
            gf_msg(this->name, GF_LOG_ERROR, EINVAL, P_MSG_INVALID_ARGUMENT,
                   "xattr_req is null");
            *op_errno = EINVAL;
            ret = -1;
        }
        goto out;
    }

    if (sys_lstat(path, &stat) != 0) {
        ret = -1;
        gf_msg(this->name, GF_LOG_ERROR, errno, P_MSG_LSTAT_FAILED,
               "lstat on %s failed", path);
        goto out;
    }
HERE, errno is not captured.
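The shape of the missing capture can be sketched as follows. This is an
illustrative stand-in, not the real posix_gfid_set() or the submitted patch;
demo_gfid_set is an invented name. On lstat() failure, errno is saved into
op_errno so callers can log or act on the real cause:

```c
#include <errno.h>
#include <sys/stat.h>

/* Hypothetical demo function mirroring the failure path above. */
int demo_gfid_set(const char *path, int *op_errno)
{
    struct stat st;

    *op_errno = 0;
    if (lstat(path, &st) != 0) {
        *op_errno = errno;   /* the capture the bug report says is missing */
        return -1;
    }
    return 0;
}
```

With the capture in place, a failed lstat on a nonexistent path surfaces
ENOENT to the caller instead of leaving op_errno at zero.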
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 18:07:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 18:07:46 +0000
Subject: [Bugs] [Bug 1736482] New: capture stat failure error while setting
the gfid
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736482
Bug ID: 1736482
Summary: capture stat failure error while setting the gfid
Product: GlusterFS
Version: mainline
Status: NEW
Component: posix
Assignee: bugs at gluster.org
Reporter: rabhat at redhat.com
CC: bugs at gluster.org
Depends On: 1736481
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1736481 +++
Description of problem:
For a create operation, after the entry is created, the posix xlator tries to
set the gfid for that entry. There are several places where setting the gfid can
fail. While the failure is handled in all cases, for one of them the errno is
not captured. Capturing it would help in debugging.
int
posix_gfid_set(xlator_t *this, const char *path, loc_t *loc, dict_t *xattr_req,
               pid_t pid, int *op_errno)
{
    uuid_t uuid_req;
    uuid_t uuid_curr;
    int ret = 0;
    ssize_t size = 0;
    struct stat stat = {
        0,
    };

    *op_errno = 0;

    if (!xattr_req) {
        if (pid != GF_SERVER_PID_TRASH) {
            gf_msg(this->name, GF_LOG_ERROR, EINVAL, P_MSG_INVALID_ARGUMENT,
                   "xattr_req is null");
            *op_errno = EINVAL;
            ret = -1;
        }
        goto out;
    }

    if (sys_lstat(path, &stat) != 0) {
        ret = -1;
        gf_msg(this->name, GF_LOG_ERROR, errno, P_MSG_LSTAT_FAILED,
               "lstat on %s failed", path);
        goto out;
    }
HERE, errno is not captured.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1736481
[Bug 1736481] capture stat failure error while setting the gfid
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 18:07:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 18:07:46 +0000
Subject: [Bugs] [Bug 1736481] capture stat failure error while setting the
gfid
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736481
Raghavendra Bhat changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1736482
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1736482
[Bug 1736482] capture stat failure error while setting the gfid
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 18:56:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 18:56:05 +0000
Subject: [Bugs] [Bug 1736564] New: GlusterFS files missing randomly.
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736564
Bug ID: 1736564
Summary: GlusterFS files missing randomly.
Product: GlusterFS
Version: 6
Hardware: x86_64
OS: Linux
Status: NEW
Component: core
Severity: high
Assignee: bugs at gluster.org
Reporter: yexue2015 at u.northwestern.edu
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Some files suddenly went missing. Then, after a couple of days, the missing
files appeared again, undamaged.
Under one of my folders there were two sub-folders and four files. At some
point, two of the files went missing, and files under the two sub-folders went
missing randomly as well. There should be 210 files under each sub-folder, but
after the disappearance only 125 and 138 files were left. I could no longer read
the missing files. However, after a few days I found the missing files were
back.
Version-Release number of selected component (if applicable):
rpm -qa | grep glusterfs
glusterfs-6.1-1.el7.x86_64
glusterfs-client-xlators-6.1-1.el7.x86_64
glusterfs-libs-6.1-1.el7.x86_64
glusterfs-fuse-6.1-1.el7.x86_64
How reproducible:
The problem occurs quite randomly. It is not clear what triggers the
disappearance or how to reproduce it.
Steps to Reproduce:
1. NA
Actual results:
NA
Expected results:
NA
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 19:47:08 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 19:47:08 +0000
Subject: [Bugs] [Bug 1736482] capture stat failure error while setting the
gfid
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736482
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23144
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Thu Aug 1 19:47:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 01 Aug 2019 19:47:09 +0000
Subject: [Bugs] [Bug 1736482] capture stat failure error while setting the
gfid
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736482
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23144 (storage/posix: set the op_errno to
proper errno during gfid set) posted (#1) for review on master by Raghavendra
Bhat
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 06:49:26 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 06:49:26 +0000
Subject: [Bugs] [Bug 1428103] Generate UUID on installation
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1428103
Vijay Bellur changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |CLOSED
Resolution|--- |NOTABUG
Flags|needinfo?(vbellur at redhat.co |
|m) |
Last Closed| |2019-08-02 06:49:26
--- Comment #5 from Vijay Bellur ---
Haven't heard back from Shreyas. Closing this bug.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 06:50:35 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 06:50:35 +0000
Subject: [Bugs] [Bug 1597798] 'mv' of directory on encrypted volume fails
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1597798
Vijay Bellur changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(vbellur at redhat.co |
|m) |
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 06:51:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 06:51:06 +0000
Subject: [Bugs] [Bug 1648169] Fuse mount would crash if features.encryption
is on in the version from 3.13.0 to 4.1.5
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1648169
Vijay Bellur changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(vbellur at redhat.co |
|m) |
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 06:51:12 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 06:51:12 +0000
Subject: [Bugs] [Bug 1428081] cluster/dht: Bug fixes to cluster.min-free-disk
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1428081
Vijay Bellur changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(vbellur at redhat.co |
|m) |
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 06:51:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 06:51:29 +0000
Subject: [Bugs] [Bug 1428075] debug/io-stats: Add errors to FOP samples
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1428075
Vijay Bellur changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(vbellur at redhat.co |
|m) |
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 07:35:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 07:35:34 +0000
Subject: [Bugs] [Bug 1727081] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1727081
Pranith Kumar K changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |ASSIGNED
Resolution|NEXTRELEASE |---
--- Comment #15 from Pranith Kumar K ---
Found one case which needs to be fixed.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 07:35:36 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 07:35:36 +0000
Subject: [Bugs] [Bug 1732772] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732772
Bug 1732772 depends on bug 1727081, which changed state.
Bug 1727081 Summary: Disperse volume : data corruption with ftruncate data in 4+2 config
https://bugzilla.redhat.com/show_bug.cgi?id=1727081
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |ASSIGNED
Resolution|NEXTRELEASE |---
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 07:35:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 07:35:37 +0000
Subject: [Bugs] [Bug 1732774] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732774
Bug 1732774 depends on bug 1727081, which changed state.
Bug 1727081 Summary: Disperse volume : data corruption with ftruncate data in 4+2 config
https://bugzilla.redhat.com/show_bug.cgi?id=1727081
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |ASSIGNED
Resolution|NEXTRELEASE |---
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 07:35:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 07:35:39 +0000
Subject: [Bugs] [Bug 1732792] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732792
Bug 1732792 depends on bug 1727081, which changed state.
Bug 1727081 Summary: Disperse volume : data corruption with ftruncate data in 4+2 config
https://bugzilla.redhat.com/show_bug.cgi?id=1727081
What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |ASSIGNED
Resolution|NEXTRELEASE |---
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 07:38:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 07:38:11 +0000
Subject: [Bugs] [Bug 1727081] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1727081
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23147
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 07:38:12 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 07:38:12 +0000
Subject: [Bugs] [Bug 1727081] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1727081
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #16 from Worker Ant ---
REVIEW: https://review.gluster.org/23147 (cluster/ec: Update lock->good_mask on
parent fop failure) posted (#1) for review on master by Pranith Kumar Karampuri
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 07:56:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 07:56:17 +0000
Subject: [Bugs] [Bug 1736848] New: Execute the "gluster peer probe
invalid_hostname" thread deadlock or the glusterd process crashes
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736848
Bug ID: 1736848
Summary: Execute the "gluster peer probe invalid_hostname"
thread deadlock or the glusterd process crashes
Product: GlusterFS
Version: 6
Hardware: x86_64
OS: Linux
Status: NEW
Component: glusterd
Severity: urgent
Assignee: bugs at gluster.org
Reporter: xlfy555 at 163.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
When glusterd starts, running the command "gluster peer probe invalid_hostname"
produces different results on different machines: on some machines glusterd
crashes and produces a core file, and on others the glusterd process ends up
with many extra child threads.
Version-Release number of selected component (if applicable):
release-6
How reproducible:
Steps to Reproduce:
Case 1
1. glusterd
2. gluster peer probe invalid_hostname
Case 2
1. glusterd
2. gluster peer probe invalid_hostname
3. gluster peer probe invalid_hostname
4. gluster peer probe invalid_hostname (repeat a few more times)
5. ps -aux | grep glusterd
6. gdb attach glusterd-pid
7. info thr (you will see many "__lll_lock_wait()" child threads)
Actual results:
Case 1
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib64/libthread_db.so.1".
Core was generated by `glusterd'.
Program terminated with signal 11, Segmentation fault.
#0 0x00007fef4bd208ff in rpc_clnt_handle_disconnect (conn=0x7fef34007890,
clnt=0x7fef34007860) at rpc-clnt.c:832
832 if (!conn->rpc_clnt->disabled && (conn->reconnect == NULL)) {
Missing separate debuginfos, use: debuginfo-install
bzip2-libs-1.0.6-13.el7.x86_64 elfutils-libelf-0.166-2.el7.x86_64
elfutils-libs-0.166-2.el7.x86_64 glibc-2.17-157.el7.x86_64
keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-37.el7_6.x86_64
libattr-2.4.46-12.el7.x86_64 libcap-2.22-8.el7.x86_64
libcom_err-1.42.9-9.el7.x86_64 libgcc-4.8.5-11.el7.x86_64
libselinux-2.5-6.el7.x86_64 libuuid-2.23.2-33.el7.x86_64
libxml2-2.9.1-6.el7_2.3.x86_64 openssl-libs-1.0.1e-60.el7.x86_64
pcre-8.32-15.el7_2.1.x86_64 systemd-libs-219-30.el7.x86_64
userspace-rcu-0.7.16-1.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64
zlib-1.2.7-17.el7.x86_64
(gdb) bt
#0 0x00007fef4bd208ff in rpc_clnt_handle_disconnect (conn=0x7fef34007890,
clnt=0x7fef34007860) at rpc-clnt.c:832
#1 rpc_clnt_notify (trans=0x7fef34007be0, mydata=0x7fef34007890,
event=, data=) at rpc-clnt.c:878
#2 0x00007fef4bd1d4e3 in rpc_transport_notify (this=,
event=event at entry=RPC_TRANSPORT_DISCONNECT, data=) at
rpc-transport.c:542
#3 0x00007fef3f3634d7 in socket_connect_error_cbk (opaque=0x7fef34007190) at
socket.c:3239
#4 0x00007fef4adb6dc5 in start_thread () from /usr/lib64/libpthread.so.0
#5 0x00007fef4a6fb73d in clone () from /usr/lib64/libc.so.6
(gdb) p conn->rpc_clnt
$1 = (struct rpc_clnt *) 0x14860
(gdb) p conn->rpc_clnt->disabled
Cannot access memory at address 0x149a0
Case 2
(gdb) info thr
Id Target Id Frame
16 Thread 0x7ff384f45700 (LWP 18259) "glfs_timer" 0x00007ff38c728bdd in
nanosleep () from /usr/lib64/libpthread.so.0
15 Thread 0x7ff384744700 (LWP 18260) "glfs_sigwait" 0x00007ff38c729101 in
sigwait () from /usr/lib64/libpthread.so.0
14 Thread 0x7ff383f43700 (LWP 18261) "glfs_memsweep" 0x00007ff38c02d66d in
nanosleep () from /usr/lib64/libc.so.6
13 Thread 0x7ff383742700 (LWP 18262) "glfs_sproc0" 0x00007ff38c725a82 in
pthread_cond_timedwait@@GLIBC_2.3.2 () from /usr/lib64/libpthread.so.0
12 Thread 0x7ff382f41700 (LWP 18263) "glfs_sproc1" 0x00007ff38c725a82 in
pthread_cond_timedwait@@GLIBC_2.3.2 () from /usr/lib64/libpthread.so.0
11 Thread 0x7ff382740700 (LWP 18264) "glusterd" 0x00007ff38c05dba3 in
select () from /usr/lib64/libc.so.6
10 Thread 0x7ff37f2c1700 (LWP 18290) "glfs_gdhooks" 0x00007ff38c7256d5 in
pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib64/libpthread.so.0
9 Thread 0x7ff37eac0700 (LWP 18291) "glfs_epoll000" 0x00007ff38c066d13 in
epoll_wait () from /usr/lib64/libc.so.6
8 Thread 0x7ff37d216700 (LWP 18306) "glfs_scleanup" 0x00007ff38c7281bd in
__lll_lock_wait () from /usr/lib64/libpthread.so.0
7 Thread 0x7ff37ca15700 (LWP 18307) "glfs_scleanup" 0x00007ff38c060bf9 in
syscall () from /usr/lib64/libc.so.6
6 Thread 0x7ff367fff700 (LWP 18315) "glfs_scleanup" 0x00007ff38c7281bd in
__lll_lock_wait () from /usr/lib64/libpthread.so.0
5 Thread 0x7ff3677fe700 (LWP 18323) "glfs_scleanup" 0x00007ff38c7281bd in
__lll_lock_wait () from /usr/lib64/libpthread.so.0
4 Thread 0x7ff366ffd700 (LWP 18331) "glfs_scleanup" 0x00007ff38c7281bd in
__lll_lock_wait () from /usr/lib64/libpthread.so.0
3 Thread 0x7ff3667fc700 (LWP 18339) "glfs_scleanup" 0x00007ff38c7281bd in
__lll_lock_wait () from /usr/lib64/libpthread.so.0
2 Thread 0x7ff365ffb700 (LWP 18347) "glfs_scleanup" 0x00007ff38c7281bd in
__lll_lock_wait () from /usr/lib64/libpthread.so.0
* 1 Thread 0x7ff38de22480 (LWP 18258) "glusterd" 0x00007ff38c722ef7 in
pthread_join () from /usr/lib64/libpthread.so.0
Expected results:
Additional info:
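The Case 1 backtrace shows conn->rpc_clnt pointing at unmapped memory (the gdb
print of 0x14860 fails), i.e. the async connect-error callback dereferenced an
rpc_clnt that had already been freed. A classic way to avoid this class of
use-after-free is for the callback thread to hold its own reference on the
object. The sketch below is illustrative only, not GlusterFS code; rpc_obj_t,
rpc_obj_ref, rpc_obj_unref, and error_cbk are invented names:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct rpc_obj {
    pthread_mutex_t lock;
    int refcount;
    int disabled;             /* field the crashing code dereferenced */
} rpc_obj_t;

rpc_obj_t *rpc_obj_new(void)
{
    rpc_obj_t *o = calloc(1, sizeof(*o));
    pthread_mutex_init(&o->lock, NULL);
    o->refcount = 1;          /* caller's initial reference */
    return o;
}

rpc_obj_t *rpc_obj_ref(rpc_obj_t *o)
{
    pthread_mutex_lock(&o->lock);
    o->refcount++;
    pthread_mutex_unlock(&o->lock);
    return o;
}

/* Drops one reference; returns 1 when the object was actually freed. */
int rpc_obj_unref(rpc_obj_t *o)
{
    pthread_mutex_lock(&o->lock);
    int remaining = --o->refcount;
    pthread_mutex_unlock(&o->lock);
    if (remaining == 0) {
        pthread_mutex_destroy(&o->lock);
        free(o);
        return 1;
    }
    return 0;
}

/* The async callback pins the object with its own reference before
 * touching it, so a concurrent owner dropping its reference cannot
 * free the object underneath the callback. */
int error_cbk(rpc_obj_t *o)
{
    rpc_obj_ref(o);
    int disabled = o->disabled;   /* safe: we hold a reference */
    rpc_obj_unref(o);
    return disabled;
}
```

The key property: lifetime is tied to the count of outstanding users, so the
"last one out" frees the object regardless of which thread that happens to be.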
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 10:06:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 10:06:10 +0000
Subject: [Bugs] [Bug 1736345] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736345
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23150
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 10:06:12 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 10:06:12 +0000
Subject: [Bugs] [Bug 1736345] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736345
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23150 (gfapi: Fix deadlock while processing
upcall) posted (#1) for review on release-7 by soumya k
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 10:07:12 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 10:07:12 +0000
Subject: [Bugs] [Bug 1736342] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23151
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 10:07:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 10:07:13 +0000
Subject: [Bugs] [Bug 1736342] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23151 (gfapi: Fix deadlock while processing
upcall) posted (#1) for review on release-5 by soumya k
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 10:09:08 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 10:09:08 +0000
Subject: [Bugs] [Bug 1736341] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23107
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 10:09:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 10:09:09 +0000
Subject: [Bugs] [Bug 1736341] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23107 (gfapi: Fix deadlock while processing
upcall) posted (#4) for review on release-6 by soumya k
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 13:04:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 13:04:32 +0000
Subject: [Bugs] [Bug 1543996] truncates read-only files on copy
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1543996
Kaleb KEITHLEY changed:
What |Removed |Added
----------------------------------------------------------------------------
Version|mainline |6
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 13:08:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 13:08:28 +0000
Subject: [Bugs] [Bug 1543996] truncates read-only files on copy
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1543996
Kaleb KEITHLEY changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1735480
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1735480
[Bug 1735480] git clone fails on gluster volumes exported via nfs-ganesha
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 14:13:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 14:13:43 +0000
Subject: [Bugs] [Bug 1733166] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733166
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-02 14:13:43
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/23108 (gfapi: Fix deadlock while processing
upcall) merged (#5) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 14:13:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 14:13:44 +0000
Subject: [Bugs] [Bug 1733520] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
Bug 1733520 depends on bug 1733166, which changed state.
Bug 1733166 Summary: potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1733166
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 14:13:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 14:13:45 +0000
Subject: [Bugs] [Bug 1736341] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
Bug 1736341 depends on bug 1733166, which changed state.
Bug 1733166 Summary: potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1733166
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 14:13:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 14:13:45 +0000
Subject: [Bugs] [Bug 1736342] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
Bug 1736342 depends on bug 1733166, which changed state.
Bug 1733166 Summary: potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1733166
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 14:13:46 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 14:13:46 +0000
Subject: [Bugs] [Bug 1736345] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736345
Bug 1736345 depends on bug 1733166, which changed state.
Bug 1733166 Summary: potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1733166
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 14:26:16 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 14:26:16 +0000
Subject: [Bugs] [Bug 1734738] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734738
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-02 14:26:16
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/23136 (geo-rep: Fix mount broker setup
issue) merged (#3) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 14:27:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 14:27:15 +0000
Subject: [Bugs] [Bug 1717824] Fencing: Added the tcmu-runner ALUA feature
support but after one of node is rebooted the glfs_file_lock() get stucked
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1717824
--- Comment #21 from Worker Ant ---
REVIEW: https://review.gluster.org/23088 (locks/fencing: Address hang while
lock preemption) merged (#4) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Aug 2 19:19:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 19:19:55 +0000
Subject: [Bugs] [Bug 1737141] New: read() returns more than file size when
using direct I/O
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737141
Bug ID: 1737141
Summary: read() returns more than file size when using direct
I/O
Product: GlusterFS
Version: 6
Status: NEW
Component: fuse
Severity: high
Assignee: bugs at gluster.org
Reporter: nsoffer at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
When using direct I/O, reading from a file returns more data than the file
contains, padding the file data with zeroes.
Here is an example.
## On a host mounting gluster using fuse
$ pwd
/rhev/data-center/mnt/glusterSD/voodoo4.tlv.redhat.com:_gv0/de566475-5b67-4987-abf3-3dc98083b44c/dom_md
$ mount | grep glusterfs
voodoo4.tlv.redhat.com:/gv0 on
/rhev/data-center/mnt/glusterSD/voodoo4.tlv.redhat.com:_gv0 type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
$ stat metadata
File: metadata
Size: 501 Blocks: 1 IO Block: 131072 regular file
Device: 31h/49d Inode: 13313776956941938127 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 36/ vdsm) Gid: ( 36/ kvm)
Context: system_u:object_r:fusefs_t:s0
Access: 2019-08-01 22:21:49.186381528 +0300
Modify: 2019-08-01 22:21:49.427404135 +0300
Change: 2019-08-01 22:21:49.969739575 +0300
Birth: -
$ cat metadata
ALIGNMENT=1048576
BLOCK_SIZE=4096
CLASS=Data
DESCRIPTION=gv0
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=1
POOL_DESCRIPTION=4k-gluster
POOL_DOMAINS=de566475-5b67-4987-abf3-3dc98083b44c:Active
POOL_SPM_ID=-1
POOL_SPM_LVER=-1
POOL_UUID=44cfb532-3144-48bd-a08c-83065a5a1032
REMOTE_PATH=voodoo4.tlv.redhat.com:/gv0
ROLE=Master
SDUUID=de566475-5b67-4987-abf3-3dc98083b44c
TYPE=GLUSTERFS
VERSION=5
_SHA_CKSUM=3d1cb836f4c93679fc5a4e7218425afe473e3cfa
$ dd if=metadata bs=4096 count=1 of=/dev/null
0+1 records in
0+1 records out
501 bytes copied, 0.000340298 s, 1.5 MB/s
$ dd if=metadata bs=4096 count=1 of=/dev/null iflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00398529 s, 1.0 MB/s
Checking the copied data, the actual content of the file is padded
with zeros to 4096 bytes.
## On one of the gluster nodes
$ pwd
/export/vdo0/brick/de566475-5b67-4987-abf3-3dc98083b44c/dom_md
$ stat metadata
File: metadata
Size: 501 Blocks: 16 IO Block: 4096 regular file
Device: fd02h/64770d Inode: 149 Links: 2
Access: (0644/-rw-r--r--) Uid: ( 36/ UNKNOWN) Gid: ( 36/ kvm)
Context: system_u:object_r:usr_t:s0
Access: 2019-08-01 22:21:50.380425478 +0300
Modify: 2019-08-01 22:21:49.427397589 +0300
Change: 2019-08-01 22:21:50.374425302 +0300
Birth: -
$ dd if=metadata bs=4096 count=1 of=/dev/null
0+1 records in
0+1 records out
501 bytes copied, 0.000991636 s, 505 kB/s
$ dd if=metadata bs=4096 count=1 of=/dev/null iflag=direct
0+1 records in
0+1 records out
501 bytes copied, 0.0011381 s, 440 kB/s
This proves that the issue is in gluster.
# gluster volume info gv0
Volume Name: gv0
Type: Replicate
Volume ID: cbc5a2ad-7246-42fc-a78f-70175fb7bf22
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: voodoo4.tlv.redhat.com:/export/vdo0/brick
Brick2: voodoo5.tlv.redhat.com:/export/vdo0/brick
Brick3: voodoo8.tlv.redhat.com:/export/vdo0/brick (arbiter)
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: disable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
$ xfs_info /export/vdo0
meta-data=/dev/mapper/vdo0 isize=512 agcount=4, agsize=6553600 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=0
data = bsize=4096 blocks=26214400, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=12800, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Version-Release number of selected component (if applicable):
Server:
$ rpm -qa | grep glusterfs
glusterfs-libs-6.4-1.fc29.x86_64
glusterfs-api-6.4-1.fc29.x86_64
glusterfs-client-xlators-6.4-1.fc29.x86_64
glusterfs-fuse-6.4-1.fc29.x86_64
glusterfs-6.4-1.fc29.x86_64
glusterfs-cli-6.4-1.fc29.x86_64
glusterfs-server-6.4-1.fc29.x86_64
Client:
$ rpm -qa | grep glusterfs
glusterfs-client-xlators-6.4-1.fc29.x86_64
glusterfs-6.4-1.fc29.x86_64
glusterfs-rdma-6.4-1.fc29.x86_64
glusterfs-cli-6.4-1.fc29.x86_64
glusterfs-libs-6.4-1.fc29.x86_64
glusterfs-fuse-6.4-1.fc29.x86_64
glusterfs-api-6.4-1.fc29.x86_64
How reproducible:
Always.
Steps to Reproduce:
1. Provision gluster volume over vdo (did not check without vdo)
2. Create a file of 501 bytes
3. Read the file using direct I/O
Actual results:
read() returns 4096 bytes, padding the file data with zeroes
Expected results:
read() returns actual file data (501 bytes)
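The invariant this report says is violated can be sketched with a short, self-contained check. It uses a throwaway local file rather than the gluster mount above; on the affected FUSE mount, the same read performed with O_DIRECT returned a zero-padded 4096 bytes instead of 501.

```python
import os
import tempfile

# Reading a 4096-byte block from a 501-byte file must return 501 bytes,
# not a zero-padded 4096, with or without direct I/O.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 501)
    path = f.name

fd = os.open(path, os.O_RDONLY)
try:
    data = os.read(fd, 4096)   # ask for a full block
finally:
    os.close(fd)
os.unlink(path)

print(len(data))  # 501: read() stops at end of file
```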
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 19:20:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 19:20:19 +0000
Subject: [Bugs] [Bug 1737141] read() returns more than file size when using
direct I/O
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737141
Nir Soffer changed:
What |Removed |Added
----------------------------------------------------------------------------
Dependent Products| |Red Hat Enterprise
| |Virtualization Manager
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 19:21:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 19:21:20 +0000
Subject: [Bugs] [Bug 1737141] read() returns more than file size when using
direct I/O
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737141
Nir Soffer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |teigland at redhat.com
Flags| |needinfo?(teigland at redhat.c
| |om)
--- Comment #1 from Nir Soffer ---
David, do you think this can affect sanlock?
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Fri Aug 2 19:25:02 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 02 Aug 2019 19:25:02 +0000
Subject: [Bugs] [Bug 1737141] read() returns more than file size when using
direct I/O
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737141
Nir Soffer changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |kwolf at redhat.com
Flags| |needinfo?(kwolf at redhat.com)
--- Comment #2 from Nir Soffer ---
Kevin, do you think this can affect qemu/qemu-img?
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Sun Aug 4 03:39:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 04 Aug 2019 03:39:50 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #738 from Worker Ant ---
REVIEW: https://review.gluster.org/23118 (tests: introduce BRICK_MUX_BAD_TESTS
variable) merged (#4) on master by Atin Mukherjee
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Sun Aug 4 07:09:48 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 04 Aug 2019 07:09:48 +0000
Subject: [Bugs] [Bug 1736482] capture stat failure error while setting the
gfid
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736482
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-04 07:09:48
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23144 (storage/posix: set the op_errno to
proper errno during gfid set) merged (#2) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Sun Aug 4 07:11:26 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 04 Aug 2019 07:11:26 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
--- Comment #73 from Worker Ant ---
REVIEW: https://review.gluster.org/23141 (xdr: add code so we have more xdr
functions covered) merged (#2) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Sun Aug 4 13:03:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 04 Aug 2019 13:03:30 +0000
Subject: [Bugs] [Bug 1717824] Fencing: Added the tcmu-runner ALUA feature
support but after one of node is rebooted the glfs_file_lock() get stucked
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1717824
Xiubo Li changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |needinfo?(spalai at redhat.com
| |)
--- Comment #22 from Xiubo Li ---
@Susant,
Since the Fencing patch has been included in release 6, this follow-up fix
should be backported, right?
Thanks.
BRs
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Sun Aug 4 14:01:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 04 Aug 2019 14:01:11 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23153
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Sun Aug 4 14:01:13 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 04 Aug 2019 14:01:13 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #739 from Worker Ant ---
REVIEW: https://review.gluster.org/23153 ([WIP]options.h: format OPTION_INIT
similar to RECONF_INIT) posted (#1) for review on master by Yaniv Kaul
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Sun Aug 4 16:55:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 04 Aug 2019 16:55:27 +0000
Subject: [Bugs] [Bug 1732774] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732774
Yaniv Kaul changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|unspecified |urgent
Severity|unspecified |urgent
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Sun Aug 4 16:55:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 04 Aug 2019 16:55:37 +0000
Subject: [Bugs] [Bug 1732792] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732792
Yaniv Kaul changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|unspecified |urgent
Severity|unspecified |urgent
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Sun Aug 4 17:00:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Sun, 04 Aug 2019 17:00:31 +0000
Subject: [Bugs] [Bug 1732772] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732772
Yaniv Kaul changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|unspecified |urgent
Severity|unspecified |urgent
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 03:07:25 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 03:07:25 +0000
Subject: [Bugs] [Bug 1737288] New: nfs client gets bad ctime for copied file
which is on glusterfs disperse volume with ctime on
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737288
Bug ID: 1737288
Summary: nfs client gets bad ctime for copied file which is on
glusterfs disperse volume with ctime on
Product: GlusterFS
Version: mainline
Status: NEW
Component: ctime
Assignee: bugs at gluster.org
Reporter: kinglongmee at gmail.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
I have a 4+2 disperse volume with ctime on, and I export a dir via nfs-ganesha:
storage.ctime: on
features.utime: on
When I copy a local file to the nfs client, stat shows a bad ctime for the file.
# stat /mnt/nfs/test*
File: ‘/mnt/nfs/test1.sh’
Size: 166 Blocks: 4 IO Block: 1048576 regular file
Device: 27h/39d Inode: 10744358902712050257 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-08-05 09:49:00.000000000 +0800
Modify: 2019-08-05 09:49:00.000000000 +0800
Change: 2061-07-23 21:54:08.000000000 +0800
Birth: -
File: ‘/mnt/nfs/test2.sh’
Size: 214 Blocks: 4 IO Block: 1048576 regular file
Device: 27h/39d Inode: 12073556847735387788 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-08-05 09:49:00.000000000 +0800
Modify: 2019-08-05 09:49:00.000000000 +0800
Change: 2061-07-23 21:54:08.000000000 +0800
Birth: -
# ps a
342188 pts/0 D+ 0:00 cp -i test1.sh test2.sh /mnt/nfs/
# gdb glusterfsd
(gdb) p *stbuf
$1 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0,
ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0,
ia_atime = 174138658, ia_mtime = 2889352448, ia_ctime = 0, ia_btime = 0,
ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0,
ia_attributes = 0, ia_attributes_mask = 0,
ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = {
suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {
read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {
read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {
read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}
It is caused by the nfs client creating the copied file in EXCLUSIVE mode,
which sets a verifier; the verifier is stored in the file's atime and mtime.
The nfs client sets the verifier as follows:
if (flags & O_EXCL) {
        data->arg.create.createmode = NFS3_CREATE_EXCLUSIVE;
        data->arg.create.verifier[0] = cpu_to_be32(jiffies);
        data->arg.create.verifier[1] = cpu_to_be32(current->pid);
}
verifier[0] is stored in the file's atime, and verifier[1] in its mtime.
But the utime logic at storage/posix also copies the mtime into the ctime at
setattr, and the ctime is not allowed to be changed to an earlier time.
/* Earlier, mdata was updated only if the existing time is less
* than the time to be updated. This would fail the scenarios
* where mtime can be set to any time using the syscall. Hence
* just updating without comparison. But the ctime is not
* allowed to changed to older date.
*/
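The far-future timestamp can be reproduced arithmetically. The sketch below (plain Python, not gluster code) byte-swaps the pid of the cp process shown in the `ps a` output above; the result matches the ia_mtime in the gdb dump and the 2061 ctime in the stat output.

```python
import struct
from datetime import datetime, timezone

# The EXCLUSIVE-create verifier word cpu_to_be32(current->pid), read
# back as a native little-endian u32, turns a small pid into a
# far-future "mtime". 342188 is the cp pid from the `ps a` output.
pid = 342188
raw = struct.pack(">I", pid)              # cpu_to_be32 on the client
fake_mtime = struct.unpack("<I", raw)[0]  # reinterpreted natively on read-back
print(fake_mtime)                         # 2889352448, the ia_mtime in the gdb dump
print(datetime.fromtimestamp(fake_mtime, timezone.utc).date())  # 2061-07-23
```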
The following code finds those PIDs that can cause a bad ctime for a copied
file.
==========================================================================
#include <stdio.h>

/* Byte-swap a 32-bit value (endianness conversion). Use unsigned to
 * avoid implementation-defined right shifts of negative values. */
static unsigned int swap_endian(unsigned int val)
{
    val = ((val << 8) & 0xFF00FF00) | ((val >> 8) & 0x00FF00FF);
    return (val << 16) | (val >> 16);
}

/* time of 2020/01/01 0:0:0 */
#define TO2020 1577808000

int main(int argc, char **argv)
{
    unsigned int i, val;
    for (i = 0; i < 500000; i++) {
        val = swap_endian(i);
        if (val > TO2020)
            printf("%u %u\n", i, val);
    }
    return 0;
}
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 03:09:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 03:09:23 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #740 from Worker Ant ---
REVIEW: https://review.gluster.org/23130 (multiple files: reduce minor work
under RCU_READ_LOCK) merged (#4) on master by Atin Mukherjee
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 03:12:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 03:12:34 +0000
Subject: [Bugs] [Bug 1708929] Add more test coverage for shd mux
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1708929
--- Comment #8 from Worker Ant ---
REVIEW: https://review.gluster.org/23135 (tests/shd: Break down shd mux tests
into multiple .t file) merged (#2) on master by Atin Mukherjee
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 03:17:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 03:17:59 +0000
Subject: [Bugs] [Bug 1737288] nfs client gets bad ctime for copied file
which is on glusterfs disperse volume with ctime on
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737288
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23154
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 03:18:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 03:18:00 +0000
Subject: [Bugs] [Bug 1737288] nfs client gets bad ctime for copied file
which is on glusterfs disperse volume with ctime on
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737288
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23154 (features/utime: always update ctime
at setattr) posted (#1) for review on master by Kinglong Mee
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 03:20:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 03:20:44 +0000
Subject: [Bugs] [Bug 1737291] New: features/locks: avoid use after freed of
frame for blocked lock
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737291
Bug ID: 1737291
Summary: features/locks: avoid use after freed of frame for
blocked lock
Product: GlusterFS
Version: mainline
Status: NEW
Component: locks
Assignee: bugs at gluster.org
Reporter: kinglongmee at gmail.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
A fop that holds a blocked lock may use freed frame info after another
unlock fop has unwound that blocked lock.
Because the blocked lock is added to the blocked list while holding the inode
lock (or another lock), the fop that queued the blocked lock must not use it
once it has dropped the inode lock.
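The hand-off hazard can be illustrated with a small sketch (plain Python, not the gluster locks code): anything the enqueuing side still needs must be captured before the lock is published to the blocked list, because the unlock path in another thread may grant it and unwind its frame immediately afterwards.

```python
import queue
import threading

blocked = queue.Queue()
unwound = threading.Event()

def unlock_path():
    # Another thread grants the blocked lock and unwinds its frame.
    lk = blocked.get()
    lk.clear()        # frame unwound: the lock's contents are gone
    unwound.set()

lock = {"frame": "frame-for-client-1"}
frame = lock["frame"]               # capture before handing off
t = threading.Thread(target=unlock_path)
t.start()
blocked.put(lock)                   # after this point, don't touch `lock`
unwound.wait()
t.join()
print(frame)                        # safe: we kept our own copy
```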
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 03:22:58 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 03:22:58 +0000
Subject: [Bugs] [Bug 1737291] features/locks: avoid use after freed of frame
for blocked lock
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737291
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23155
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 03:23:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 03:23:00 +0000
Subject: [Bugs] [Bug 1737291] features/locks: avoid use after freed of frame
for blocked lock
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737291
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23155 (features/locks: avoid use after freed
of frame for blocked lock) posted (#1) for review on master by Kinglong Mee
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 05:06:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:06:51 +0000
Subject: [Bugs] [Bug 1736342] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-05 05:06:51
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23151 (gfapi: Fix deadlock while processing
upcall) merged (#1) on release-5 by soumya k
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 05:06:51 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:06:51 +0000
Subject: [Bugs] [Bug 1733520] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
Bug 1733520 depends on bug 1736342, which changed state.
Bug 1736342 Summary: potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 05:06:52 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:06:52 +0000
Subject: [Bugs] [Bug 1736341] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
Bug 1736341 depends on bug 1736342, which changed state.
Bug 1736342 Summary: potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 05:07:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:07:30 +0000
Subject: [Bugs] [Bug 1736341] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-05 05:07:30
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23107 (gfapi: Fix deadlock while processing
upcall) merged (#4) on release-6 by soumya k
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 05:07:30 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:07:30 +0000
Subject: [Bugs] [Bug 1733520] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
Bug 1733520 depends on bug 1736341, which changed state.
Bug 1736341 Summary: potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 05:29:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:29:39 +0000
Subject: [Bugs] [Bug 1736564] GlusterFS files missing randomly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736564
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |atumball at redhat.com
--- Comment #1 from Amar Tumballi ---
Can you try the command below and see if that helps?
'gluster volume set <volname> parallel-readdir disable'
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 05:30:18 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:30:18 +0000
Subject: [Bugs] [Bug 1736848] Execute the "gluster peer probe
invalid_hostname" thread deadlock or the glusterd process crashes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736848
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|unspecified |high
CC| |amukherj at redhat.com,
| |atumball at redhat.com,
| |moagrawa at redhat.com
Assignee|bugs at gluster.org |srakonde at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 05:33:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:33:57 +0000
Subject: [Bugs] [Bug 1737141] read() returns more than file size when using
direct I/O
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737141
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|unspecified |high
CC| |atumball at redhat.com,
| |csaba at redhat.com,
| |kdhananj at redhat.com,
| |khiremat at redhat.com,
| |nbalacha at redhat.com,
| |pkarampu at redhat.com,
| |rabhat at redhat.com,
| |rgowdapp at redhat.com,
| |rkavunga at redhat.com
--- Comment #3 from Amar Tumballi ---
@Nir, thanks for the report. We will look into this.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 05:50:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:50:10 +0000
Subject: [Bugs] [Bug 1737311] New: (glusterfs-6.5) - GlusterFS 6.5 tracker
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737311
Bug ID: 1737311
Summary: (glusterfs-6.5) - GlusterFS 6.5 tracker
Product: GlusterFS
Version: 6
Status: NEW
Component: core
Assignee: bugs at gluster.org
Reporter: hgowtham at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Tracker bug for 6.5
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 05:52:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:52:33 +0000
Subject: [Bugs] [Bug 1737311] (glusterfs-6.5) - GlusterFS 6.5 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737311
hari gowtham changed:
What |Removed |Added
----------------------------------------------------------------------------
Keywords| |Tracking
Depends On| |1736341, 1731509, 1730545,
| |1716848
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1716848
[Bug 1716848] DHT: directory permissions are wiped out
https://bugzilla.redhat.com/show_bug.cgi?id=1730545
[Bug 1730545] gluster v geo-rep status command timing out
https://bugzilla.redhat.com/show_bug.cgi?id=1731509
[Bug 1731509] snapd crashes sometimes
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
[Bug 1736341] potential deadlock while processing callbacks in gfapi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 05:52:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:52:33 +0000
Subject: [Bugs] [Bug 1716848] DHT: directory permissions are wiped out
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1716848
hari gowtham changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1737311
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737311
[Bug 1737311] (glusterfs-6.5) - GlusterFS 6.5 tracker
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 05:52:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:52:33 +0000
Subject: [Bugs] [Bug 1730545] gluster v geo-rep status command timing out
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1730545
hari gowtham changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1737311
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737311
[Bug 1737311] (glusterfs-6.5) - GlusterFS 6.5 tracker
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 05:52:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:52:33 +0000
Subject: [Bugs] [Bug 1731509] snapd crashes sometimes
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1731509
hari gowtham changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1737311
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737311
[Bug 1737311] (glusterfs-6.5) - GlusterFS 6.5 tracker
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 05:52:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:52:33 +0000
Subject: [Bugs] [Bug 1736341] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
hari gowtham changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1737311
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737311
[Bug 1737311] (glusterfs-6.5) - GlusterFS 6.5 tracker
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 05:56:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 05:56:37 +0000
Subject: [Bugs] [Bug 1737313] New: (glusterfs-5.9) - GlusterFS 5.9 tracker
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737313
Bug ID: 1737313
Summary: (glusterfs-5.9) - GlusterFS 5.9 tracker
Product: GlusterFS
Version: 5
Status: NEW
Component: core
Assignee: bugs at gluster.org
Reporter: hgowtham at redhat.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
Tracker bug for 5.9
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 06:35:21 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 06:35:21 +0000
Subject: [Bugs] [Bug 1733520] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
Sunil Kumar Acharya changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |sheggodu at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 06:43:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 06:43:14 +0000
Subject: [Bugs] [Bug 1733520] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
Sunil Kumar Acharya changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |MODIFIED
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 06:49:01 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 06:49:01 -0000
Subject: [Bugs] [Bug 1728766] Volume start failed when shd is down in one of
the node in cluster
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1728766
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-05 06:48:55
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23007 (glusterd/shd: Return null proc if
process is not running.) merged (#5) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 06:49:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 06:49:19 +0000
Subject: [Bugs] [Bug 1727256] Directory pending heal in heal info output
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1727256
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-05 06:49:19
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23005 (graph/shd: attach volfile even if
ctx->active is NULL) merged (#10) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 06:50:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 06:50:43 +0000
Subject: [Bugs] [Bug 1737141] read() returns more than file size when using
direct I/O
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737141
Krutika Dhananjay changed:
What |Removed |Added
----------------------------------------------------------------------------
Component|fuse |sharding
QA Contact| |bugs at gluster.org
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 06:51:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 06:51:11 +0000
Subject: [Bugs] [Bug 1737141] read() returns more than file size when using
direct I/O
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737141
Krutika Dhananjay changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |kdhananj at redhat.com
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 07:08:37 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 07:08:37 +0000
Subject: [Bugs] [Bug 1529842] Read-only listxattr syscalls seem to translate
to non-read-only FOPs
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1529842
Aravinda VK changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-08-05 07:08:37
--- Comment #3 from Aravinda VK ---
(In reply to nh2 from comment #2)
> Did you use the same version as I was using, 3.12.3?
>
> Unfortunately I won't be able to put time into re-reproducing this, as we
> switched to Ceph a year ago.
Thanks for the update. Closing this bug since the issue is not reproducible in
the latest version, as mentioned in Comment 1. Please reopen if found again.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 07:08:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 07:08:39 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23156
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 07:08:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 07:08:39 +0000
Subject: [Bugs] [Bug 1193929] GlusterFS can be improved
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1193929
--- Comment #741 from Worker Ant ---
REVIEW: https://review.gluster.org/23156 (index.{c|h}: minor changes) posted
(#1) for review on master by Yaniv Kaul
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 08:30:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 08:30:06 +0000
Subject: [Bugs] [Bug 1443027] Accessing file from aux mount is not
triggering afr selfheals.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1443027
Ravishankar N changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |WONTFIX
Last Closed| |2019-08-05 08:30:06
--- Comment #3 from Ravishankar N ---
I'm not planning to work on this bug any time soon. In the interest of reducing
bug backlog count, I am closing it. Please feel free to re-open as needed.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 08:32:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 08:32:55 +0000
Subject: [Bugs] [Bug 1682925] Gluster volumes never heal during oVirt
4.2->4.3 upgrade
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1682925
Ravishankar N changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |INSUFFICIENT_DATA
Last Closed| |2019-08-05 08:32:55
--- Comment #9 from Ravishankar N ---
I'm closing this bug as there is not much information on what the problem is.
Please feel free to re-open with the relevant details/reproducer steps if the
issue occurs again.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 09:16:16 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 09:16:16 +0000
Subject: [Bugs] [Bug 1737141] read() returns more than file size when using
direct I/O
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737141
Kevin Wolf changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags|needinfo?(teigland at redhat.c |
|om) |
|needinfo?(kwolf at redhat.com) |
--- Comment #4 from Kevin Wolf ---
(In reply to Nir Soffer from comment #2)
> Kevin, do you think this can affect qemu/qemu-img?
This is not a problem for QEMU as long as the file size is correct. If gluster
didn't do the zero padding, QEMU would do it internally.
In fact, fixing this in gluster may break the case of unaligned image sizes
with QEMU, because the image size is rounded up to sector (512-byte)
granularity and the gluster driver turns short reads into errors. This would
actually affect non-O_DIRECT too, which already seems to behave this way, so
can you just give this a quick test?
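The 512-byte rounding Kevin describes can be sketched as follows; the constant
and helper name here are illustrative, not actual QEMU code:

```python
SECTOR = 512  # QEMU rounds image sizes up to this granularity

def rounded_size(size_bytes: int) -> int:
    """Round a byte count up to the next 512-byte sector boundary."""
    return (size_bytes + SECTOR - 1) // SECTOR * SECTOR

# An image of 1000 bytes is treated as 1024 bytes (two sectors), so a
# read covering the last sector extends past EOF and must be
# zero-padded rather than failing as a short read.
```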
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 10:00:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 10:00:38 +0000
Subject: [Bugs] [Bug 1693184] A brick process(glusterfsd) died with 'memory
violation'
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693184
Ravishankar N changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |INSUFFICIENT_DATA
Flags|needinfo?(knjeong at growthsof |
|t.co.kr) |
Last Closed| |2019-08-05 10:00:38
--- Comment #2 from Ravishankar N ---
Hi Jeong, I'm closing this bug as gluster 3.6 was EOL'd long ago. Please feel
free to re-open the bug if the issue persists in any of the currently
supported releases.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 10:04:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 10:04:43 +0000
Subject: [Bugs] [Bug 1727430] CPU Spike casue files unavailable
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1727430
Ravishankar N changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |CLOSED
Resolution|--- |INSUFFICIENT_DATA
Last Closed| |2019-08-05 10:04:43
--- Comment #3 from Ravishankar N ---
Hi, I'm closing this bug since I haven't heard from you. Please feel free to
re-open with the information I requested/steps to reproduce.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 11:11:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 11:11:11 +0000
Subject: [Bugs] [Bug 1414608] Weird directory appear when rmdir the
directory in disk full condition
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1414608
Ravishankar N changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-08-05 11:11:11
--- Comment #6 from Ravishankar N ---
Disk full scenarios can cause problems ranging from ENOENT during creates, to
ENOTEMPTY during rmdirs, to heals not progressing due to a lack of gluster
xattrs. Recent versions of gluster have a 'storage.reserve' volume option in
the posix xlator to reserve space for rebalance, heals, etc. That should
mitigate this to some extent. But even that is not entirely race-free, as it
checks and updates free space only once every 5 seconds. I'm going ahead and
closing this bug as CURRENTRELEASE.
George, please feel free to re-open the bug if storage.reserve doesn't solve
your use case or if you have other ideas to solve this in a more robust way.
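As a rough illustration of why a periodically sampled free-space check is
racy, a posix-style reserve check might look like the sketch below; the
function name and default percentage are assumptions, not the actual xlator
code:

```python
import os

RESERVE_PCT = 1.0  # hypothetical storage.reserve value, in percent

def has_free_space(path: str, reserve_pct: float = RESERVE_PCT) -> bool:
    """Return True if the filesystem backing `path` has more free space
    than the configured reserve.

    If this is sampled only once every few seconds, writes that land
    between two samples can still push the disk past the reserve.
    """
    st = os.statvfs(path)
    free_pct = st.f_bavail * 100.0 / st.f_blocks
    return free_pct > reserve_pct
```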
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 11:31:49 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 11:31:49 +0000
Subject: [Bugs] [Bug 1733880] [geo-rep]: gluster command not found while
setting up a non-root session
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733880
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-05 11:31:49
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23116 (geo-rep: Fix mount broker setup
issue) merged (#3) on release-6 by hari gowtham
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 12:55:26 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 12:55:26 +0000
Subject: [Bugs] [Bug 1730433] Gluster release 6 build errors on ppc64le
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1730433
Kaleb KEITHLEY changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |CLOSED
CC| |kkeithle at redhat.com
Resolution|--- |WORKSFORME
Last Closed| |2019-08-05 12:55:26
--- Comment #3 from Kaleb KEITHLEY ---
openssl-devel is in RHEL base (rhel-7-server-rpms repo). There is no need to
build it from source.
You can get userspace-rcu(-devel) from EPEL or the CentOS Storage SIG. (yes,
even for ppc64le, see
http://mirror.centos.org/altarch/7.6.1810/storage/ppc64le/gluster-6/) But
build from source if you wish.
Closing as WORKSFORME. Reopen if necessary.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 12:59:58 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 12:59:58 +0000
Subject: [Bugs] [Bug 1663337] Gluster documentation on quorum-reads option
is incorrect
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1663337
Ravishankar N changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
Version|4.1 |mainline
--- Comment #1 from Ravishankar N ---
I have sent PR https://github.com/gluster/glusterdocs/pull/493 to update the
documentation.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 13:35:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 13:35:45 +0000
Subject: [Bugs] [Bug 1737484] geo-rep syncing significantly behind and also
only one of the directories are synced with tracebacks seen
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737484
Aravinda VK changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |avishwan at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 13:44:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 13:44:11 +0000
Subject: [Bugs] [Bug 1737484] geo-rep syncing significantly behind and also
only one of the directories are synced with tracebacks seen
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737484
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23158
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 13:44:12 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 13:44:12 +0000
Subject: [Bugs] [Bug 1737484] geo-rep syncing significantly behind and also
only one of the directories are synced with tracebacks seen
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737484
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23158 (geo-rep: Fix Config Get Race) posted
(#1) for review on master by Aravinda VK
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 15:08:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 15:08:32 +0000
Subject: [Bugs] [Bug 1737141] read() returns more than file size when using
direct I/O
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737141
--- Comment #5 from David Teigland ---
(In reply to Nir Soffer from comment #1)
> David, do you think this can affect sanlock?
I don't think so. sanlock doesn't use any space that it didn't first write to
initialize.
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Aug 5 17:34:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 17:34:09 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23159
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 17:34:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 17:34:10 +0000
Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1693692
--- Comment #74 from Worker Ant ---
REVIEW: https://review.gluster.org/23159 (tests/line-coverage: more commands
added to cover xdrs) posted (#1) for review on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Aug 5 18:00:45 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 05 Aug 2019 18:00:45 +0000
Subject: [Bugs] [Bug 1734423] interrupts leak memory
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734423
Sunil Kumar Acharya changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |MODIFIED
CC| |sheggodu at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 01:52:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 01:52:10 +0000
Subject: [Bugs] [Bug 1734423] interrupts leak memory
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734423
Sunil Kumar Acharya changed:
What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version| |glusterfs-6.0-11
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 01:52:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 01:52:11 +0000
Subject: [Bugs] [Bug 1733520] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
Sunil Kumar Acharya changed:
What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version| |glusterfs-6.0-11
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 01:55:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 01:55:20 +0000
Subject: [Bugs] [Bug 1733520] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |ON_QA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 01:55:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 01:55:23 +0000
Subject: [Bugs] [Bug 1734423] interrupts leak memory
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734423
errata-xmlrpc changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |ON_QA
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 03:17:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 03:17:43 +0000
Subject: [Bugs] [Bug 1737676] New: Upgrading a Gluster node fails when user
edited glusterd.vol file exists
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737676
Bug ID: 1737676
Summary: Upgrading a Gluster node fails when user edited
glusterd.vol file exists
Product: GlusterFS
Version: mainline
Status: NEW
Component: glusterd
Severity: high
Assignee: bugs at gluster.org
Reporter: amukherj at redhat.com
CC: amukherj at redhat.com, bmekala at redhat.com,
bugs at gluster.org, rhs-bugs at redhat.com,
rtalur at redhat.com, sankarshan at redhat.com,
storage-qa-internal at redhat.com, vbellur at redhat.com
Blocks: 1734534
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1734534 +++
Description of problem:
When a user has edited the glusterd.vol file in /etc/glusterfs and then
updates the glusterfs packages, bricks cannot contact glusterd.
Version-Release number of selected component (if applicable):
glusterfs-6 and above (including mainline)
How reproducible:
Always
Steps to Reproduce:
1. install glusterfs-5 or lower
2. create and start a volume
3. edit glusterd.vol and modify options like base port / max port
4. yum update gluster packages to glusterfs-6
5. restart volumes
Actual result:
bricks can't talk to glusterd
Expected result:
bricks should be able to talk to glusterd on 24007.
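For reference, a user-edited glusterd.vol of the kind described in step 3
typically looks like the fragment below; the port values shown are
illustrative, not the reporter's actual edits:

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option base-port 49152
    option max-port 60999
end-volume
```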
--- Additional comment from Raghavendra Talur on 2019-08-05 14:33:01 UTC ---
I think the change happened in upstream commit
c96778b354ea82943442aab158adbb854ca43a52. I propose that we fix this problem
by keeping the default in the glusterd code and letting glusterd.vol override
it, instead of having the value only in glusterd.vol.
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1734534
[Bug 1734534] Upgrading a RHGS node fails when user edited glusterd.vol file
exists
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 03:22:26 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 03:22:26 +0000
Subject: [Bugs] [Bug 1737676] Upgrading a Gluster node fails when user
edited glusterd.vol file exists
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737676
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23160
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 03:22:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 03:22:27 +0000
Subject: [Bugs] [Bug 1737676] Upgrading a Gluster node fails when user
edited glusterd.vol file exists
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737676
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23160 (rpc/transport: have default
listen-port) posted (#1) for review on master by Atin Mukherjee
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 05:11:56 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 05:11:56 +0000
Subject: [Bugs] [Bug 1734305] ctime: When healing ctime xattr for legacy
files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734305
Vivek Das changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |vdas at redhat.com
Blocks| |1696809
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 05:11:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 05:11:59 +0000
Subject: [Bugs] [Bug 1734305] ctime: When healing ctime xattr for legacy
files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734305
RHEL Product and Program Management changed:
What |Removed |Added
----------------------------------------------------------------------------
Rule Engine Rule| |Gluster: Auto pm_ack at Eng
| |In-Flight RHGS3.5 Blocker
| |BZs
Flags|rhgs-3.5.0? blocker? |rhgs-3.5.0+ blocker+
Rule Engine Rule| |665
Target Release|--- |RHGS 3.5.0
Rule Engine Rule| |666
Rule Engine Rule| |327
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 05:04:04 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 05:04:04 +0000
Subject: [Bugs] [Bug 1737313] (glusterfs-5.9) - GlusterFS 5.9 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737313
Amar Tumballi changed:
What |Removed |Added
----------------------------------------------------------------------------
Keywords| |Tracking
CC| |atumball at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 05:45:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 05:45:19 +0000
Subject: [Bugs] [Bug 1736345] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736345
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-06 05:45:19
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23150 (gfapi: Fix deadlock while processing
upcall) merged (#1) on release-7 by soumya k
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 05:45:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 05:45:19 +0000
Subject: [Bugs] [Bug 1733520] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
Bug 1733520 depends on bug 1736345, which changed state.
Bug 1736345 Summary: potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1736345
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 05:45:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 05:45:20 +0000
Subject: [Bugs] [Bug 1736341] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736341
Bug 1736341 depends on bug 1736345, which changed state.
Bug 1736345 Summary: potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1736345
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 05:45:21 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 05:45:21 +0000
Subject: [Bugs] [Bug 1736342] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
Bug 1736342 depends on bug 1736345, which changed state.
Bug 1736345 Summary: potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1736345
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 06:06:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:06:15 +0000
Subject: [Bugs] [Bug 1737288] nfs client gets bad ctime for copied file
which is on glusterfs disperse volume with ctime on
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737288
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-06 06:06:15
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23154 (features/utime: always update ctime
at setattr) merged (#2) on master by Kotresh HR
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 06:17:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:17:19 +0000
Subject: [Bugs] [Bug 1737705] New: ctime: nfs client gets bad ctime for
copied file which is on glusterfs disperse volume with ctime on
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737705
Bug ID: 1737705
Summary: ctime: nfs client gets bad ctime for copied file which
is on glusterfs disperse volume with ctime on
Product: Red Hat Gluster Storage
Version: rhgs-3.5
Status: NEW
Component: core
Severity: high
Priority: medium
Assignee: atumball at redhat.com
Reporter: khiremat at redhat.com
QA Contact: rhinduja at redhat.com
CC: atumball at redhat.com, bugs at gluster.org,
khiremat at redhat.com, kinglongmee at gmail.com,
rhs-bugs at redhat.com, sankarshan at redhat.com,
storage-qa-internal at redhat.com
Depends On: 1737288
Target Milestone: ---
Classification: Red Hat
+++ This bug was initially created as a clone of Bug #1737288 +++
Description of problem:
I have a 4+2 disperse volume with ctime on, and export a dir from nfs-ganesha:
storage.ctime: on
features.utime: on
When I copy a local file to the nfs mount, stat shows a bad ctime for the file.
# stat /mnt/nfs/test*
File: '/mnt/nfs/test1.sh'
Size: 166 Blocks: 4 IO Block: 1048576 regular file
Device: 27h/39d Inode: 10744358902712050257 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-08-05 09:49:00.000000000 +0800
Modify: 2019-08-05 09:49:00.000000000 +0800
Change: 2061-07-23 21:54:08.000000000 +0800
Birth: -
File: '/mnt/nfs/test2.sh'
Size: 214 Blocks: 4 IO Block: 1048576 regular file
Device: 27h/39d Inode: 12073556847735387788 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-08-05 09:49:00.000000000 +0800
Modify: 2019-08-05 09:49:00.000000000 +0800
Change: 2061-07-23 21:54:08.000000000 +0800
Birth: -
# ps a
342188 pts/0 D+ 0:00 cp -i test1.sh test2.sh /mnt/nfs/
# gdb glusterfsd
(gdb) p *stbuf
$1 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0,
ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0,
ia_atime = 174138658, ia_mtime = 2889352448, ia_ctime = 0, ia_btime = 0,
ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0,
ia_attributes = 0, ia_attributes_mask = 0,
ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = {
suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {
read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {
read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {
read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}
It is caused by the nfs client creating the copied file in EXCLUSIVE mode,
which sets a verifier; the verifier is stored in the file's atime and mtime.
The nfs client sets the verifier as:
if (flags & O_EXCL) {
data->arg.create.createmode = NFS3_CREATE_EXCLUSIVE;
data->arg.create.verifier[0] = cpu_to_be32(jiffies);
data->arg.create.verifier[1] = cpu_to_be32(current->pid);
}
verifier[0] is stored as the file's atime, and verifier[1] as its mtime.
But utime at storage/posix also updates the ctime from the mtime at setattr,
and setting ctime to an earlier time is not allowed.
/* Earlier, mdata was updated only if the existing time is less
* than the time to be updated. This would fail the scenarios
* where mtime can be set to any time using the syscall. Hence
* just updating without comparison. But the ctime is not
* allowed to changed to older date.
*/
The following code can be used to find those PIDs that may cause a bad ctime
for a copied file.
==========================================================================
#include <stdio.h>

/* byte-swap a 32-bit value; unsigned avoids signed-shift overflow */
unsigned int swap_endian(unsigned int val)
{
    val = ((val << 8) & 0xFF00FF00) | ((val >> 8) & 0x00FF00FF);
    return (val << 16) | (val >> 16);
}

// time of 2020/01/01 0:0:0
#define TO2020 1577808000

int main(int argc, char **argv)
{
    unsigned int i = 0, val = 0;
    for (i = 0; i < 500000; i++) {
        val = swap_endian(i);
        if (val > TO2020)
            printf("%u %u\n", i, val);
    }
    return 0;
}
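As a cross-check, the same byte swap can be reproduced in Python. Note that
byte-swapping the pid of the stuck cp process from the report (342188) yields
exactly the bogus ia_mtime value 2889352448 captured in the gdb dump, i.e. a
timestamp in the year 2061:

```python
import struct

def swap32(val):
    """Byte-swap a 32-bit value (same operation as the C swap_endian)."""
    return struct.unpack("<I", struct.pack(">I", val))[0]

TO2020 = 1577808000  # 2020-01-01 00:00:00 UTC

# pid 342188 (0x000538AC) byte-swaps to 0xAC380500 == 2889352448,
# the ia_mtime seen in the gdb dump of *stbuf.
print(swap32(342188))            # 2889352448
print(swap32(342188) > TO2020)   # True: lands past 2020, hence "2061" ctime
```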
--- Additional comment from Worker Ant on 2019-08-05 03:18:00 UTC ---
REVIEW: https://review.gluster.org/23154 (features/utime: always update ctime
at setattr) posted (#1) for review on master by Kinglong Mee
--- Additional comment from Worker Ant on 2019-08-06 06:06:15 UTC ---
REVIEW: https://review.gluster.org/23154 (features/utime: always update ctime
at setattr) merged (#2) on master by Kotresh HR
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737288
[Bug 1737288] nfs client gets bad ctime for copied file which is on glusterfs
disperse volume with ctime on
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 06:17:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:17:19 +0000
Subject: [Bugs] [Bug 1737288] nfs client gets bad ctime for copied file
which is on glusterfs disperse volume with ctime on
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737288
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1737705
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737705
[Bug 1737705] ctime: nfs client gets bad ctime for copied file which is on
glusterfs disperse volume with ctime on
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 06:17:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:17:20 +0000
Subject: [Bugs] [Bug 1737705] ctime: nfs client gets bad ctime for copied
file which is on glusterfs disperse volume with ctime on
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737705
RHEL Product and Program Management changed:
What |Removed |Added
----------------------------------------------------------------------------
Rule Engine Rule| |Gluster: set proposed
| |release flag for new BZs at
| |RHGS
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 06:17:58 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:17:58 +0000
Subject: [Bugs] [Bug 1737705] ctime: nfs client gets bad ctime for copied
file which is on glusterfs disperse volume with ctime on
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737705
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|atumball at redhat.com |khiremat at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 06:18:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:18:55 +0000
Subject: [Bugs] [Bug 1737705] ctime: nfs client gets bad ctime for copied
file which is on glusterfs disperse volume with ctime on
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737705
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 06:39:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:39:19 +0000
Subject: [Bugs] [Bug 1697293] DHT: print hash and layout values in
hexadecimal format in the logs
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1697293
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-06 06:39:19
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23124 (cluster/dht: Log hashes in hex)
merged (#2) on master by N Balachandran
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 06:43:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:43:59 +0000
Subject: [Bugs] [Bug 1737712] New: Unable to create geo-rep session on a
non-root setup.
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737712
Bug ID: 1737712
Summary: Unable to create geo-rep session on a non-root setup.
Product: GlusterFS
Version: 6
Hardware: x86_64
OS: Linux
Status: NEW
Component: geo-replication
Keywords: Regression
Severity: high
Assignee: bugs at gluster.org
Reporter: khiremat at redhat.com
CC: avishwan at redhat.com, bugs at gluster.org,
csaba at redhat.com, khiremat at redhat.com,
kiyer at redhat.com, rhs-bugs at redhat.com,
sankarshan at redhat.com, storage-qa-internal at redhat.com
Depends On: 1734734, 1734738
Target Milestone: ---
Classification: Community
Description of problem:
Unable to create a non-root geo-rep session on a geo-rep setup.
Version-Release number of selected component (if applicable):
gluster-6.0
How reproducible:
Always
Steps to Reproduce:
1. Create a non-root geo-rep setup.
2. Try to create a non-root geo-rep session.
Actual results:
# gluster volume geo-replication master-rep geoaccount at 10.70.43.185::slave-rep create push-pem
gluster command not found on 10.70.43.185 for user geoaccount.
geo-replication command failed
Expected results:
# gluster volume geo-replication master-rep geoaccount at 10.70.43.185::slave-rep
Creating geo-replication session between master-rep &
geoaccount at 10.70.43.185::slave-rep has been successful
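A "gluster command not found ... for user" failure on the slave typically
points at a missing or stale mount-broker configuration for that user. For
reference, a slave-side glusterd.vol mount-broker section of the kind involved
looks roughly like the following (paths, volume, user, and group names are
illustrative, not taken from this report):

```
volume management
    type mgmt/glusterd
    option mountbroker-root /var/mountbroker-root
    option mountbroker-geo-replication.geoaccount slave-rep
    option geo-replication-log-group geogroup
    option rpc-auth-allow-insecure on
end-volume
```

glusterd on the slave must be restarted after changing these options for the
non-root session create to succeed.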
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1734734
[Bug 1734734] Unable to create geo-rep session on a non-root setup.
https://bugzilla.redhat.com/show_bug.cgi?id=1734738
[Bug 1734738] Unable to create geo-rep session on a non-root setup.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 06:43:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:43:59 +0000
Subject: [Bugs] [Bug 1734738] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734738
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1737712
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737712
[Bug 1737712] Unable to create geo-rep session on a non-root setup.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 06:44:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:44:17 +0000
Subject: [Bugs] [Bug 1737712] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737712
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |khiremat at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 06:47:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:47:24 +0000
Subject: [Bugs] [Bug 1737712] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737712
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23161
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 06:47:25 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:47:25 +0000
Subject: [Bugs] [Bug 1737712] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737712
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23161 (geo-rep: Fix mount broker setup
issue) posted (#1) for review on release-6 by Kotresh HR
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 06:48:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:48:34 +0000
Subject: [Bugs] [Bug 1737716] New: Unable to create geo-rep session on a
non-root setup.
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737716
Bug ID: 1737716
Summary: Unable to create geo-rep session on a non-root setup.
Product: GlusterFS
Version: 5
Hardware: x86_64
OS: Linux
Status: NEW
Component: geo-replication
Keywords: Regression
Severity: high
Assignee: bugs at gluster.org
Reporter: khiremat at redhat.com
CC: avishwan at redhat.com, bugs at gluster.org,
csaba at redhat.com, khiremat at redhat.com,
kiyer at redhat.com, rhs-bugs at redhat.com,
sankarshan at redhat.com, storage-qa-internal at redhat.com
Depends On: 1734734, 1734738
Blocks: 1737712
Target Milestone: ---
Classification: Community
Description of problem:
Unable to create a non-root geo-rep session on a geo-rep setup.
Version-Release number of selected component (if applicable):
gluster-5.0
How reproducible:
Always
Steps to Reproduce:
1. Create a non-root geo-rep setup.
2. Try to create a non-root geo-rep session.
Actual results:
# gluster volume geo-replication master-rep geoaccount at 10.70.43.185::slave-rep create push-pem
gluster command not found on 10.70.43.185 for user geoaccount.
geo-replication command failed
Expected results:
# gluster volume geo-replication master-rep geoaccount at 10.70.43.185::slave-rep
Creating geo-replication session between master-rep &
geoaccount at 10.70.43.185::slave-rep has been successful
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1734734
[Bug 1734734] Unable to create geo-rep session on a non-root setup.
https://bugzilla.redhat.com/show_bug.cgi?id=1734738
[Bug 1734738] Unable to create geo-rep session on a non-root setup.
https://bugzilla.redhat.com/show_bug.cgi?id=1737712
[Bug 1737712] Unable to create geo-rep session on a non-root setup.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 06:48:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:48:34 +0000
Subject: [Bugs] [Bug 1734738] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734738
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1737716
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737716
[Bug 1737716] Unable to create geo-rep session on a non-root setup.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 06:48:34 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 06:48:34 +0000
Subject: [Bugs] [Bug 1737712] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737712
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1737716
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737716
[Bug 1737716] Unable to create geo-rep session on a non-root setup.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:02:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:02:29 +0000
Subject: [Bugs] [Bug 1737716] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737716
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |khiremat at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 07:03:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:03:43 +0000
Subject: [Bugs] [Bug 1737716] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737716
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23162
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:03:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:03:44 +0000
Subject: [Bugs] [Bug 1737716] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737716
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23162 (geo-rep: Fix mount broker setup
issue) posted (#1) for review on release-5 by Kotresh HR
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:08:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:08:06 +0000
Subject: [Bugs] [Bug 1737676] Upgrading a Gluster node fails when user
edited glusterd.vol file exists
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737676
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-06 07:08:06
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23160 (rpc/transport: have default
listen-port) merged (#2) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:11:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:11:57 +0000
Subject: [Bugs] [Bug 1737745] New: ctime: When healing ctime xattr for
legacy files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737745
Bug ID: 1737745
Summary: ctime: When healing ctime xattr for legacy files, if
multiple clients access and modify the same file, the
ctime might be updated incorrectly.
Product: GlusterFS
Version: 6
Hardware: x86_64
OS: Linux
Status: NEW
Component: ctime
Severity: high
Assignee: bugs at gluster.org
Reporter: khiremat at redhat.com
CC: bugs at gluster.org
Depends On: 1734299
Blocks: 1734305
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1734299 +++
Description of problem:
Ctime heals the ctime xattr ("trusted.glusterfs.mdata") in lookup
if it's not present. In a multi-client scenario, there is a race
which results in updating the ctime xattr to an older value.
e.g. Let c1 and c2 be two clients and file1 be a file which
doesn't have the ctime xattr. Let the ctime of file1 be t1
(ctime heals time attributes from the backend when the xattr is not present).
Now the following operations are done on the mounts:
c1 -> ls -l /mnt1/file1 | c2 -> ls -l /mnt2/file1; echo "append" >> /mnt2/file1;
The race is that both c1 and c2 didn't fetch the ctime xattr in lookup,
so both of them try to heal the ctime to time 't1'. If c2 wins the race and
appends the file before c1 heals it, it sets the time to 't1' and updates
it to 't2' (because of the append). Now c1 proceeds to heal and sets it back
to 't1', which is incorrect.
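A race of this shape can be closed by making the heal a create-only operation
(e.g. setting the xattr only if it does not already exist), so a heal that
loses the race can never overwrite a newer value written by the other client.
A minimal Python sketch of that idea (all names hypothetical; this simulates
the xattr store, it is not the actual gluster code):

```python
import threading

xattrs = {}                 # simulated per-file mdata xattr store
lock = threading.Lock()

def heal_mdata(path, legacy_ctime):
    """Create-only heal: succeeds only if no mdata xattr exists yet."""
    with lock:
        if path in xattrs:
            return False    # lost the race; a newer value is already there
        xattrs[path] = legacy_ctime
        return True

def append_update(path, new_ctime):
    """A write path only ever moves ctime forward."""
    with lock:
        xattrs[path] = max(xattrs.get(path, 0), new_ctime)

# c2 wins: it heals to t1=100, then its append bumps ctime to t2=200.
heal_mdata("file1", 100)
append_update("file1", 200)
# c1's late heal is now a no-op instead of rolling ctime back to t1.
heal_mdata("file1", 100)
print(xattrs["file1"])      # 200
```

The key property is that the late heal returns False and leaves t2 in place,
which is exactly the "went back in time" outcome the bug describes being
avoided.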
Version-Release number of selected component (if applicable):
mainline
How reproducible:
Always
Steps to Reproduce:
1. Create single brick gluster volume and start it
2. Mount at /mnt1 and /mnt2
3. Disable ctime
gluster volume set ctime off
4. Create a file
touch /mnt/file1
5. Enable ctime
gluster volume set ctime on
6. Put a breakpoint at gf_utime_set_mdata_lookup_cbk on '/mnt1'
7. ls -l /mnt1/file1
This hits the break point, allow for root gfid and don't continue on
stbuf->ia_gfid equals to file1's gfid
8. ls -l /mnt2/file1
9. The ctime xattr is healed from /mnt2. Capture it.
getfattr -d -m . -e hex //file1 | grep mdata
10. echo "append" >> /mnt2/file1 and capture mdata
getfattr -d -m . -e hex //file1 | grep mdata
11. Continue the break point at step 7 and capture the mdata
Actual results:
mdata xattr at step 11 is equal to step 9 (Went back in time)
Expected results:
mdata xattr at step 11 should be equal to step 10
Additional info:
--- Additional comment from Worker Ant on 2019-07-30 08:14:18 UTC ---
REVIEW: https://review.gluster.org/23131 (posix/ctime: Fix race during lookup
ctime xattr heal) posted (#1) for review on master by Kotresh HR
--- Additional comment from Worker Ant on 2019-08-01 02:59:49 UTC ---
REVIEW: https://review.gluster.org/23131 (posix/ctime: Fix race during lookup
ctime xattr heal) merged (#2) on master by Amar Tumballi
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1734299
[Bug 1734299] ctime: When healing ctime xattr for legacy files, if multiple
clients access and modify the same file, the ctime might be updated
incorrectly.
https://bugzilla.redhat.com/show_bug.cgi?id=1734305
[Bug 1734305] ctime: When healing ctime xattr for legacy files, if multiple
clients access and modify the same file, the ctime might be updated
incorrectly.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 07:11:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:11:57 +0000
Subject: [Bugs] [Bug 1734299] ctime: When healing ctime xattr for legacy
files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734299
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1737745
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737745
[Bug 1737745] ctime: When healing ctime xattr for legacy files, if multiple
clients access and modify the same file, the ctime might be updated
incorrectly.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:11:57 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:11:57 +0000
Subject: [Bugs] [Bug 1734305] ctime: When healing ctime xattr for legacy
files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734305
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1737745
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737745
[Bug 1737745] ctime: When healing ctime xattr for legacy files, if multiple
clients access and modify the same file, the ctime might be updated
incorrectly.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:12:11 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:12:11 +0000
Subject: [Bugs] [Bug 1737745] ctime: When healing ctime xattr for legacy
files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737745
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |khiremat at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 07:15:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:15:17 +0000
Subject: [Bugs] [Bug 1737746] New: ctime: nfs client gets bad ctime for
copied file which is on glusterfs disperse volume with ctime on
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737746
Bug ID: 1737746
Summary: ctime: nfs client gets bad ctime for copied file which
is on glusterfs disperse volume with ctime on
Product: GlusterFS
Version: 6
Status: NEW
Component: ctime
Severity: high
Priority: medium
Assignee: bugs at gluster.org
Reporter: khiremat at redhat.com
CC: atumball at redhat.com, bugs at gluster.org,
khiremat at redhat.com, kinglongmee at gmail.com
Depends On: 1737288
Blocks: 1737705
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1737288 +++
Description of problem:
I have a 4+2 disperse volume with ctime on, and export a dir from nfs-ganesha,
storage.ctime: on
features.utime: on
When I copy a local file to nfs client, stat shows bad ctime for the file.
# stat /mnt/nfs/test*
File: '/mnt/nfs/test1.sh'
Size: 166 Blocks: 4 IO Block: 1048576 regular file
Device: 27h/39d Inode: 10744358902712050257 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-08-05 09:49:00.000000000 +0800
Modify: 2019-08-05 09:49:00.000000000 +0800
Change: 2061-07-23 21:54:08.000000000 +0800
Birth: -
File: '/mnt/nfs/test2.sh'
Size: 214 Blocks: 4 IO Block: 1048576 regular file
Device: 27h/39d Inode: 12073556847735387788 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-08-05 09:49:00.000000000 +0800
Modify: 2019-08-05 09:49:00.000000000 +0800
Change: 2061-07-23 21:54:08.000000000 +0800
Birth: -
# ps a
342188 pts/0 D+ 0:00 cp -i test1.sh test2.sh /mnt/nfs/
# gdb glusterfsd
(gdb) p *stbuf
$1 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0,
ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0,
ia_atime = 174138658, ia_mtime = 2889352448, ia_ctime = 0, ia_btime = 0,
ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0,
ia_attributes = 0, ia_attributes_mask = 0,
ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = {
suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {
read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {
read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {
read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}
It is caused by the nfs client creating the copied file in EXCLUSIVE mode, which
sets a verifier; the verifier is stored in the file's atime and mtime.
nfs client set the verifier as,
if (flags & O_EXCL) {
data->arg.create.createmode = NFS3_CREATE_EXCLUSIVE;
data->arg.create.verifier[0] = cpu_to_be32(jiffies);
data->arg.create.verifier[1] = cpu_to_be32(current->pid);
}
verifier[0] is stored in the file's atime, and verifier[1] in its mtime.
But utime at storage/posix also copies the mtime into the ctime during setattr,
and setting the ctime back to an earlier time is not allowed.
/* Earlier, mdata was updated only if the existing time is less
* than the time to be updated. This would fail the scenarios
* where mtime can be set to any time using the syscall. Hence
* just updating without comparison. But the ctime is not
* allowed to changed to older date.
*/
The following code can be used to find the PIDs that may cause a bad ctime for
a copied file.
==========================================================================
#include <stdio.h>

int swap_endian(int val)
{
    val = ((val << 8) & 0xFF00FF00) | ((val >> 8) & 0x00FF00FF);
    return (val << 16) | (val >> 16);
}

// time of 2020/01/01 0:0:0
#define TO2020 1577808000

int main(int argc, char **argv)
{
    unsigned int i = 0, val = 0;
    for (i = 0; i < 500000; i++) {
        val = swap_endian(i);
        if (val > TO2020)
            printf("%u %u\n", i, val);
    }
    return 0;
}
--- Additional comment from Worker Ant on 2019-08-05 03:18:00 UTC ---
REVIEW: https://review.gluster.org/23154 (features/utime: always update ctime
at setattr) posted (#1) for review on master by Kinglong Mee
--- Additional comment from Worker Ant on 2019-08-06 06:06:15 UTC ---
REVIEW: https://review.gluster.org/23154 (features/utime: always update ctime
at setattr) merged (#2) on master by Kotresh HR
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737288
[Bug 1737288] nfs client gets bad ctime for copied file which is on glusterfs
disperse volume with ctime on
https://bugzilla.redhat.com/show_bug.cgi?id=1737705
[Bug 1737705] ctime: nfs client gets bad ctime for copied file which is on
glusterfs disperse volume with ctime on
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 07:15:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:15:17 +0000
Subject: [Bugs] [Bug 1737288] nfs client gets bad ctime for copied file
which is on glusterfs disperse volume with ctime on
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737288
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1737746
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737746
[Bug 1737746] ctime: nfs client gets bad ctime for copied file which is on
glusterfs disperse volume with ctime on
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:15:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:15:17 +0000
Subject: [Bugs] [Bug 1737705] ctime: nfs client gets bad ctime for copied
file which is on glusterfs disperse volume with ctime on
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737705
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1737746
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737746
[Bug 1737746] ctime: nfs client gets bad ctime for copied file which is on
glusterfs disperse volume with ctime on
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:15:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:15:33 +0000
Subject: [Bugs] [Bug 1737746] ctime: nfs client gets bad ctime for copied
file which is on glusterfs disperse volume with ctime on
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737746
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
Assignee|bugs at gluster.org |khiremat at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 07:27:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:27:32 +0000
Subject: [Bugs] [Bug 1737745] ctime: When healing ctime xattr for legacy
files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737745
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23163
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:27:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:27:33 +0000
Subject: [Bugs] [Bug 1737745] ctime: When healing ctime xattr for legacy
files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737745
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23163 (posix/ctime: Fix race during lookup
ctime xattr heal) posted (#1) for review on release-6 by Kotresh HR
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:28:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:28:38 +0000
Subject: [Bugs] [Bug 1737746] ctime: nfs client gets bad ctime for copied
file which is on glusterfs disperse volume with ctime on
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737746
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23164
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:28:39 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:28:39 +0000
Subject: [Bugs] [Bug 1737746] ctime: nfs client gets bad ctime for copied
file which is on glusterfs disperse volume with ctime on
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737746
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|ASSIGNED |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23164 (features/utime: always update ctime
at setattr) posted (#1) for review on release-6 by Kotresh HR
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:30:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:30:53 +0000
Subject: [Bugs] [Bug 1734305] ctime: When healing ctime xattr for legacy
files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734305
Sunil Kumar Acharya changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |sheggodu at redhat.com
Flags| |needinfo?(khiremat at redhat.c
| |om)
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 07:42:53 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 07:42:53 +0000
Subject: [Bugs] [Bug 1734305] ctime: When healing ctime xattr for legacy
files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734305
Atin Mukherjee changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |amukherj at redhat.com
Flags|needinfo?(khiremat at redhat.c |
|om) |
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 08:22:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 08:22:15 +0000
Subject: [Bugs] [Bug 1734305] ctime: When healing ctime xattr for legacy
files, if multiple clients access and modify the same file,
the ctime might be updated incorrectly.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734305
Sunil Kumar Acharya changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |MODIFIED
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 08:34:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 08:34:27 +0000
Subject: [Bugs] [Bug 1737778] New: ocf resource agent for volumes don't work
in non-standard environment
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737778
Bug ID: 1737778
Summary: ocf resource agent for volumes don't work in
non-standard environment
Product: GlusterFS
Version: 4.1
Status: NEW
Component: scripts
Assignee: bugs at gluster.org
Reporter: jiri.lunacek at hosting90.cz
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
The ocf resource agent for volumes doesn't work when short hostnames don't
match gluster peer names, or when the volume is not defined across all peers.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 08:39:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 08:39:14 +0000
Subject: [Bugs] [Bug 1732774] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732774
Rejy M Cyriac changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |blocker?
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 08:40:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 08:40:14 +0000
Subject: [Bugs] [Bug 1732792] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732792
Rejy M Cyriac changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |blocker?
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 08:40:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 08:40:27 +0000
Subject: [Bugs] [Bug 1732793] I/O error on writes to a disperse volume when
replace-brick is executed
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732793
Rejy M Cyriac changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |blocker?
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 08:46:54 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 08:46:54 +0000
Subject: [Bugs] [Bug 1737778] ocf resource agent for volumes don't work in
non-standard environment
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737778
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23165
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 08:46:55 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 08:46:55 +0000
Subject: [Bugs] [Bug 1737778] ocf resource agent for volumes don't work in
non-standard environment
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737778
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23165 (peer_map parameter and fix in state
detection when no brick is running on peer) posted (#1) for review on master by
None
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 09:03:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 09:03:42 +0000
Subject: [Bugs] [Bug 1732774] Disperse volume : data corruption with
ftruncate data in 4+2 config
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732774
Ashish Pandey changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 09:12:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 09:12:59 +0000
Subject: [Bugs] [Bug 1735514] Open fd heal should filter O_APPEND/O_EXCL
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1735514
Ashish Pandey changed:
What |Removed |Added
----------------------------------------------------------------------------
Doc Type|If docs needed, set a value |No Doc Update
Red Hat Bugzilla changed:
What |Removed |Added
----------------------------------------------------------------------------
Doc Type|No Doc Update |No Doc Update
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 09:13:58 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 09:13:58 +0000
Subject: [Bugs] [Bug 1732770] fix truncate lock to cover the write in
tuncate clean
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1732770
Ashish Pandey changed:
What |Removed |Added
----------------------------------------------------------------------------
Doc Type|If docs needed, set a value |No Doc Update
Red Hat Bugzilla changed:
What |Removed |Added
----------------------------------------------------------------------------
Doc Type|No Doc Update |No Doc Update
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 09:19:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 09:19:23 +0000
Subject: [Bugs] [Bug 1729108] Memory leak in glusterfsd process
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1729108
Red Hat Bugzilla changed:
What |Removed |Added
----------------------------------------------------------------------------
Doc Type|If docs needed, set a value |No Doc Update
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 09:34:48 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 09:34:48 +0000
Subject: [Bugs] [Bug 1737484] geo-rep syncing significantly behind and also
only one of the directories are synced with tracebacks seen
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737484
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23158 (geo-rep: Fix Config Get Race) merged
(#2) on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 09:39:25 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 09:39:25 +0000
Subject: [Bugs] [Bug 1733520] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733520
Red Hat Bugzilla changed:
What |Removed |Added
----------------------------------------------------------------------------
Doc Type|If docs needed, set a value |No Doc Update
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 09:52:02 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 09:52:02 +0000
Subject: [Bugs] [Bug 1734423] interrupts leak memory
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1734423
nchilaka changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |nchilaka at redhat.com
QA Contact|rhinduja at redhat.com |nchilaka at redhat.com
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 10:38:09 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 10:38:09 +0000
Subject: [Bugs] [Bug 1727727] Build+Packaging Automation
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1727727
hari gowtham changed:
What |Removed |Added
----------------------------------------------------------------------------
Flags| |needinfo?(mscherer at redhat.c
| |om)
--- Comment #9 from hari gowtham ---
Hi Misc,
Can you please create the machines mentioned above, so we can set them up?
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 10:58:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 10:58:43 +0000
Subject: [Bugs] [Bug 1620580] Deleted a volume and created a new volume with
similar but not the same name. The kubernetes pod still keeps on running
and doesn't crash. Still possible to write to gluster mount
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1620580
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23166
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 10:58:44 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 10:58:44 +0000
Subject: [Bugs] [Bug 1620580] Deleted a volume and created a new volume with
similar but not the same name. The kubernetes pod still keeps on running
and doesn't crash. Still possible to write to gluster mount
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1620580
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #3 from Worker Ant ---
REVIEW: https://review.gluster.org/23166 (protocol/handshake: pass volume-id
for extra check) posted (#1) for review on master by Amar Tumballi
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 10:59:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 10:59:23 +0000
Subject: [Bugs] [Bug 1737716] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737716
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
Last Closed| |2019-08-06 10:59:23
--- Comment #2 from Worker Ant ---
REVIEW: https://review.gluster.org/23162 (geo-rep: Fix mount broker setup
issue) merged (#1) on release-5 by Kotresh HR
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 10:59:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 10:59:24 +0000
Subject: [Bugs] [Bug 1737712] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737712
Bug 1737712 depends on bug 1737716, which changed state.
Bug 1737716 Summary: Unable to create geo-rep session on a non-root setup.
https://bugzilla.redhat.com/show_bug.cgi?id=1737716
What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |NEXTRELEASE
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 05:04:04 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 05:04:04 +0000
Subject: [Bugs] [Bug 1737313] (glusterfs-5.9) - GlusterFS 5.9 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737313
hari gowtham changed:
What |Removed |Added
----------------------------------------------------------------------------
Depends On| |1737716, 1736342, 1733881
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1733881
[Bug 1733881] [geo-rep]: gluster command not found while setting up a non-root
session
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
[Bug 1736342] potential deadlock while processing callbacks in gfapi
https://bugzilla.redhat.com/show_bug.cgi?id=1737716
[Bug 1737716] Unable to create geo-rep session on a non-root setup.
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 11:07:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 11:07:50 +0000
Subject: [Bugs] [Bug 1733881] [geo-rep]: gluster command not found while
setting up a non-root session
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1733881
hari gowtham changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1737313
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737313
[Bug 1737313] (glusterfs-5.9) - GlusterFS 5.9 tracker
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 11:07:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 11:07:50 +0000
Subject: [Bugs] [Bug 1736342] potential deadlock while processing callbacks
in gfapi
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1736342
hari gowtham changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1737313
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737313
[Bug 1737313] (glusterfs-5.9) - GlusterFS 5.9 tracker
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 11:07:50 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 11:07:50 +0000
Subject: [Bugs] [Bug 1737716] Unable to create geo-rep session on a non-root
setup.
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737716
hari gowtham changed:
What |Removed |Added
----------------------------------------------------------------------------
Blocks| |1737313
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1737313
[Bug 1737313] (glusterfs-5.9) - GlusterFS 5.9 tracker
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Tue Aug 6 11:08:41 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 11:08:41 +0000
Subject: [Bugs] [Bug 1737313] (glusterfs-5.9) - GlusterFS 5.9 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737313
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
External Bug ID| |Gluster.org Gerrit 23167
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 11:08:42 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 11:08:42 +0000
Subject: [Bugs] [Bug 1737313] (glusterfs-5.9) - GlusterFS 5.9 tracker
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1737313
Worker Ant changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |POST
--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/23167 (doc: Added release 5.9 notes) posted
(#1) for review on release-5 by hari gowtham
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 11:12:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 11:12:00 +0000
Subject: [Bugs] [Bug 1726175] CentOs 6 GlusterFS client creates files with
time 01/01/1970
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1726175
Kotresh HR changed:
What |Removed |Added
----------------------------------------------------------------------------
Component|fuse |ctime
Assignee|khiremat at redhat.com |bugs at gluster.org
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Tue Aug 6 11:28:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Tue, 06 Aug 2019 11:28:27 +0000
Subject: [Bugs] [Bug 1641969] Mounted Dir Gets Error in GlusterFS Storage
Cluster with SSL/TLS Encryption as Doing add-brick and remove-brick
Repeatly
In-Reply-To: