From bugzilla at redhat.com Mon Jan 4 13:03:28 2021
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Jan 2021 13:03:28 +0000
Subject: [Bugs] [Bug 1752739] fuse mount crash observed with sharding +
truncate
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1752739
SATHEESARAN changed:
What       |Removed                     |Added
----------------------------------------------------------------------------
Summary    |Issues seen with sharding + |fuse mount crash observed
           |truncate                    |with sharding + truncate
--- Comment #24 from SATHEESARAN ---
When this issue was found, the initial complaint was that a fuse mount crash
was seen; later investigation showed it to be two separate problems:
1. Crash in FOPs because of integer overflow
2. Sharding doesn't support truncate
Currently the fix addresses only (1). Excerpt from the commit message:
"This patch fixes a crash in FOPs that operate on really large sharded
files where number of participant shards could sometimes exceed
signed int32 max."
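For illustration, a minimal C sketch of this class of overflow; the file and
block sizes, variable names, and arithmetic below are hypothetical and are
not the actual shard xlator code:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* Hypothetical sizes: a 16 TiB sharded file with a 4 KiB shard
     * block size yields 2^32 participant shards. */
    uint64_t file_size  = 16ULL * 1024 * 1024 * 1024 * 1024;
    uint64_t block_size = 4096;

    /* Buggy pattern: the shard count lands in a signed 32-bit int,
     * so any count above INT32_MAX (2147483647) overflows. */
    int32_t shards_bad = (int32_t)(file_size / block_size);

    /* Fixed pattern: keep the count in 64 bits end to end. */
    uint64_t shards_ok = file_size / block_size;

    printf("32-bit count: %" PRId32 "\n", shards_bad);  /* wrapped/garbage */
    printf("64-bit count: %" PRIu64 "\n", shards_ok);   /* 4294967296 */
    return 0;
}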
But the fix for the other issue, sharding support for truncate, is not yet
available; it will be tracked with a separate bug.
Based on this, updating the bug summary.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Jan 4 13:09:00 2021
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 04 Jan 2021 13:09:00 +0000
Subject: [Bugs] [Bug 1752739] fuse mount crash observed with sharding +
truncate
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1752739
SATHEESARAN changed:
What       |Removed                     |Added
----------------------------------------------------------------------------
Status     |ON_QA                       |VERIFIED
--- Comment #25 from SATHEESARAN ---
Tested with RHGS 3.5.4 interim build (glusterfs-6.0-51.el8rhgs):
1. Created a replica 3 volume and enabled sharding
2. Created 2 fuse mounts
3. Created a file of size 512 B from one of the fuse mounts
4. From one fuse mount, truncated that file to 0
5. From the other fuse mount, truncated the same file to 0 again
I/O errors are expected, as sharding has not supported truncate from day one,
but no crashes were observed (a minimal sketch of steps 3-5 follows below).
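A minimal C sketch of steps 3-5, assuming the sharded volume is already
fuse-mounted at two hypothetical mount points (/mnt/fuse1 and /mnt/fuse2);
the paths and file name are illustrative:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[512];
    memset(buf, 'a', sizeof(buf));

    /* Step 3: create a 512 B file through the first mount. */
    int fd = open("/mnt/fuse1/testfile", O_CREAT | O_WRONLY, 0644);
    if (fd < 0 || write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
        perror("create/write");
    if (fd >= 0)
        close(fd);

    /* Step 4: truncate the file to 0 through the first mount;
     * EIO is acceptable since sharding lacks truncate support. */
    if (truncate("/mnt/fuse1/testfile", 0) < 0)
        perror("truncate via mount 1");

    /* Step 5: truncate the same file to 0 again through the second
     * mount; the pass criterion is that neither fuse mount process
     * crashes, not that the calls succeed. */
    if (truncate("/mnt/fuse2/testfile", 0) < 0)
        perror("truncate via mount 2");

    return 0;
}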
Also tested this scenario in an RHHI-V setup with 10 VMs running workloads;
no issues were seen.
--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Fri Jan 15 07:32:07 2021
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Fri, 15 Jan 2021 07:32:07 +0000
Subject: [Bugs] [Bug 1425296] qemu_gluster_co_get_block_status gets SIGABRT
when doing blockcommit continually
In-Reply-To:
References:
Message-ID:
https://bugzilla.redhat.com/show_bug.cgi?id=1425296
Bug 1425296 depends on bug 1425293, which changed state.
Bug 1425293 Summary: qemu_gluster_co_get_block_status gets SIGABRT when doing blockcommit continually
https://bugzilla.redhat.com/show_bug.cgi?id=1425293
What       |Removed                     |Added
----------------------------------------------------------------------------
Status     |ASSIGNED                    |CLOSED
Resolution |---                         |WONTFIX
--
You are receiving this mail because:
You are on the CC list for the bug.