[Gluster-devel] Weekly Untriaged Bugs

Atin Mukherjee amukherj at redhat.com
Sun Apr 28 13:13:22 UTC 2019

While I understand this report captures bugs filed in the last week that do
not have the ‘Triaged’ keyword, would it make more sense to also exclude
bugs which aren’t in the NEW state?

I believe the intention of this report is to check which bugs haven’t been
looked at by maintainers/developers yet. BZs which are already fixed or are
in ASSIGNED/POST state need not feature in this list; otherwise it gives
the false impression that too many bugs are going unnoticed, which isn’t
the reality. Thoughts?
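The proposed criterion amounts to a simple post-processing filter on the report data: keep only bugs that are still in NEW state and lack the ‘Triaged’ keyword. A minimal sketch of that filter, assuming Bugzilla-style `status` and `keywords` fields (the function name and sample data here are hypothetical, not the actual report script):

```python
# Hypothetical sketch of the proposed filter, not the real report
# generator. Keeps only bugs that are still in NEW state and do not
# carry the 'Triaged' keyword, i.e. bugs nobody has picked up yet.

def untriaged_new_bugs(bugs):
    """Return the subset of bugs that look untouched by maintainers."""
    return [
        bug for bug in bugs
        if bug["status"] == "NEW" and "Triaged" not in bug["keywords"]
    ]

# Illustrative sample data (IDs taken from the report below; the
# statuses and keywords are made up for the example).
bugs = [
    {"id": 1699023, "status": "NEW", "keywords": []},
    {"id": 1695416, "status": "ASSIGNED", "keywords": []},      # being worked on
    {"id": 1695480, "status": "NEW", "keywords": ["Triaged"]},  # already triaged
]

print([b["id"] for b in untriaged_new_bugs(bugs)])  # -> [1699023]
```

With this filter, bugs already fixed or in ASSIGNED/POST state would drop out of the weekly list, leaving only the genuinely unlooked-at ones.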

On Mon, 22 Apr 2019 at 07:15, <jenkins at build.gluster.org> wrote:

> [...truncated 6 lines...]
> https://bugzilla.redhat.com/1699023 / core: Brick is not able to detach
> successfully in brick_mux environment
> https://bugzilla.redhat.com/1695416 / core: client log flooding with
> intentional socket shutdown message when a brick is down
> https://bugzilla.redhat.com/1695480 / core: Global Thread Pool
> https://bugzilla.redhat.com/1694943 / core: parallel-readdir slows down
> directory listing
> https://bugzilla.redhat.com/1700295 / core: The data couldn't be flushed
> immediately even with O_SYNC in glfs_create or with
> glfs_fsync/glfs_fdatasync after glfs_write.
> https://bugzilla.redhat.com/1698861 / disperse: Renaming a directory when
> 2 bricks of multiple disperse subvols are down leaves both old and new dirs
> on the bricks.
> https://bugzilla.redhat.com/1697293 / distribute: DHT: print hash and
> layout values in hexadecimal format in the logs
> https://bugzilla.redhat.com/1701039 / distribute: gluster replica 3
> arbiter Unfortunately data not distributed equally
> https://bugzilla.redhat.com/1697971 / fuse: Segfault in FUSE process,
> potential use after free
> https://bugzilla.redhat.com/1694139 / glusterd: Error waiting for job
> 'heketi-storage-copy-job' to complete on one-node k3s deployment.
> https://bugzilla.redhat.com/1695099 / glusterd: The number of glusterfs
> processes keeps increasing, using all available resources
> https://bugzilla.redhat.com/1692349 / project-infrastructure:
> gluster-csi-containers job is failing
> https://bugzilla.redhat.com/1698716 / project-infrastructure: Regression
> job did not vote for https://review.gluster.org/#/c/glusterfs/+/22366/
> https://bugzilla.redhat.com/1698694 / project-infrastructure: regression
> job isn't voting back to gerrit
> https://bugzilla.redhat.com/1699712 / project-infrastructure: regression
> job is voting Success even in case of failure
> https://bugzilla.redhat.com/1693385 / project-infrastructure: request to
> change the version of fedora in fedora-smoke-job
> https://bugzilla.redhat.com/1695484 / project-infrastructure: smoke fails
> with "Build root is locked by another process"
> https://bugzilla.redhat.com/1693184 / replicate: A brick
> process(glusterfsd) died with 'memory violation'
> https://bugzilla.redhat.com/1698566 / selfheal: shd crashed while
> executing ./tests/bugs/core/bug-1432542-mpx-restart-crash.t in CI
> https://bugzilla.redhat.com/1699309 / snapshot: Gluster snapshot fails
> with systemd automounted bricks
> https://bugzilla.redhat.com/1696633 / tests: GlusterFs v4.1.5 Tests from
> /tests/bugs/ module failing on Intel
> https://bugzilla.redhat.com/1697812 / website: mention a pointer to all
> the mailing lists available under glusterfs project(
> https://www.gluster.org/community/)
> [...truncated 2 lines...]
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel

- Atin (atinm)
