[Bugs] [Bug 1732875] New: GlusterFS 7.0 tracker

bugzilla at redhat.com
Wed Jul 24 15:04:19 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1732875

            Bug ID: 1732875
           Summary: GlusterFS 7.0 tracker
           Product: GlusterFS
           Version: 7
            Status: NEW
         Component: core
          Keywords: Reopened, Tracking, Triaged
          Assignee: bugs at gluster.org
          Reporter: rkothiya at redhat.com
                CC: bugs at gluster.org, guillaume.pavese at interact-iv.com,
                    moagrawa at redhat.com, pasik at iki.fi,
                    rgowdapp at redhat.com, sheggodu at redhat.com,
                    srangana at redhat.com
        Depends On: 1670718, 1672318, 1672818 (glusterfs-6.0), 1673972,
                    1674364, 1676356, 1676429, 1679275, 1679892, 1680585,
                    1680586, 1683574, 1683880, 1684029, 1684385, 1685771,
                    1686364, 1686875, 1687672, 1691616, 1695416, 1696147,
                    1696513, 1700865
  Target Milestone: ---
    Classification: Community



+++ This bug was initially created as a clone of Bug #1672818 +++

Tracker bug for the GlusterFS 7.0 release.


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1670718
[Bug 1670718] md-cache should be loaded at a position in graph where it sees
stats in write cbk
https://bugzilla.redhat.com/show_bug.cgi?id=1672318
[Bug 1672318] "failed to fetch volume file" when trying to activate host in DC
with glusterfs 3.12 domains
https://bugzilla.redhat.com/show_bug.cgi?id=1672818
[Bug 1672818] GlusterFS 6.0 tracker
https://bugzilla.redhat.com/show_bug.cgi?id=1673972
[Bug 1673972] insufficient logging in glusterd_resolve_all_bricks
https://bugzilla.redhat.com/show_bug.cgi?id=1674364
[Bug 1674364] glusterfs-fuse client not benefiting from page cache on read
after write
https://bugzilla.redhat.com/show_bug.cgi?id=1676356
[Bug 1676356] glusterfs FUSE client crashing every few days with 'Failed to
dispatch handler'
https://bugzilla.redhat.com/show_bug.cgi?id=1676429
[Bug 1676429] distribute: Perf regression in mkdir path
https://bugzilla.redhat.com/show_bug.cgi?id=1679275
[Bug 1679275] dht: fix double extra unref of inode at heal path
https://bugzilla.redhat.com/show_bug.cgi?id=1679892
[Bug 1679892] assertion failure log in glusterd.log file when a volume start is
triggered
https://bugzilla.redhat.com/show_bug.cgi?id=1680585
[Bug 1680585] remove glupy from code and build
https://bugzilla.redhat.com/show_bug.cgi?id=1680586
[Bug 1680586] Building RPM packages with _for_fedora_koji_builds enabled fails
on el6
https://bugzilla.redhat.com/show_bug.cgi?id=1683574
[Bug 1683574] gluster-server package currently requires the older userspace-rcu
against expectation
https://bugzilla.redhat.com/show_bug.cgi?id=1683880
[Bug 1683880] Multiple shd processes are running on brick_mux environment
https://bugzilla.redhat.com/show_bug.cgi?id=1684029
[Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken
https://bugzilla.redhat.com/show_bug.cgi?id=1684385
[Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to
shard on-disk xattrs disappearing
https://bugzilla.redhat.com/show_bug.cgi?id=1685771
[Bug 1685771] glusterd memory usage grows at 98 MB/h while being monitored by
RHGSWA
https://bugzilla.redhat.com/show_bug.cgi?id=1686364
[Bug 1686364] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to
shard on-disk xattrs disappearing
https://bugzilla.redhat.com/show_bug.cgi?id=1686875
[Bug 1686875] packaging: rdma on s390x, unnecessary ldconfig scriptlets
https://bugzilla.redhat.com/show_bug.cgi?id=1687672
[Bug 1687672] [geo-rep]: Checksum mismatch when 2x2 vols are converted to
arbiter
https://bugzilla.redhat.com/show_bug.cgi?id=1691616
[Bug 1691616] client log flooding with intentional socket shutdown message when
a brick is down
https://bugzilla.redhat.com/show_bug.cgi?id=1695416
[Bug 1695416] client log flooding with intentional socket shutdown message when
a brick is down
https://bugzilla.redhat.com/show_bug.cgi?id=1696147
[Bug 1696147] Multiple shd processes are running on brick_mux environment
https://bugzilla.redhat.com/show_bug.cgi?id=1696513
[Bug 1696513] Multiple shd processes are running on brick_mux environment
https://bugzilla.redhat.com/show_bug.cgi?id=1700865
[Bug 1700865] FUSE mount seems to be hung and not accessible