December 2016 Archives by author
Starting: Thu Dec 1 00:28:02 UTC 2016
Ending: Sat Dec 31 11:44:28 UTC 2016
Messages: 1732
- [Bugs] [Bug 1398602] DHT hash ranges are not distributed across two subvols
bugzilla at redhat.com
- [Bugs] [Bug 1399891] [compound FOPs]: Memory leak while doing FOPs with brick down
bugzilla at redhat.com
- [Bugs] [Bug 1400013] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
bugzilla at redhat.com
- [Bugs] [Bug 1400392] New: no free loop back device in slave25.cloud.gluster.org
bugzilla at redhat.com
- [Bugs] [Bug 1399592] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1399592] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1400392] no free loop back device in slave25.cloud.gluster.org
bugzilla at redhat.com
- [Bugs] [Bug 1397052] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1397052] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1397052] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1400026] Duplicate value assigned to GD_MSG_DAEMON_STATE_REQ_RCVD and GD_MSG_BRICK_CLEANUP_SUCCESS messages
bugzilla at redhat.com
- [Bugs] [Bug 1395628] Labelled geo-rep checkpoints hide geo-replication status
bugzilla at redhat.com
- [Bugs] [Bug 1399088] geo-replica slave node goes faulty for non-root user session due to fail to locate gluster binary
bugzilla at redhat.com
- [Bugs] [Bug 1398554] Rename is failing with ENOENT while remove-brick start operation is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1369077] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1398602] DHT hash ranges are not distributed across two subvols
bugzilla at redhat.com
- [Bugs] [Bug 1393282] (glusterfs-3.7.18) Tracker bug for GlusterFS-v3.7.18
bugzilla at redhat.com
- [Bugs] [Bug 1397430] PEER_REJECT, EVENT_BRICKPATH_RESOLVE_FAILED, EVENT_COMPARE_FRIEND_VOLUME_FAILED are not seen
bugzilla at redhat.com
- [Bugs] [Bug 1369077] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1369077] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1399593] Obvious typo in cleanup code in rpc_clnt_notify
bugzilla at redhat.com
- [Bugs] [Bug 1399592] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1138229] Disconnections from glusterfs through libgfapi
bugzilla at redhat.com
- [Bugs] [Bug 1356824] glusterfs-3.7.1-16 problem with size difference in listing dir and file
bugzilla at redhat.com
- [Bugs] [Bug 1398602] DHT hash ranges are not distributed across two subvols
bugzilla at redhat.com
- [Bugs] [Bug 1400013] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400013] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1198849] Minor improvements and cleanup for the build system
  bugzilla at redhat.com
- [Bugs] [Bug 1400458] New: [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400459] New: [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400013] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400013] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400013] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400460] New: [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400458] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400459] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400459] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400460] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400460] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400459] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400460] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1393694] The directories get renamed when data bricks are offline in 4*(2+1) volume
  bugzilla at redhat.com
- [Bugs] [Bug 1400458] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
  bugzilla at redhat.com
- [Bugs] [Bug 1400458] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
bugzilla at redhat.com
- [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
bugzilla at redhat.com
- [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1399154] After ganesha node reboot/shutdown, portblock process goes to FAILED state
  bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1400392] no free loop back device in slave25.cloud.gluster.org
  bugzilla at redhat.com
- [Bugs] [Bug 1395648] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
  bugzilla at redhat.com
- [Bugs] [Bug 1369077] The directories get renamed when data bricks are offline in 4*(2+1) volume
  bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1369077] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1399031] build: add systemd dependency to glusterfs sub-package
bugzilla at redhat.com
- [Bugs] [Bug 1378842] [RFE] 'gluster volume get' should implement the way to retrieve volume options using the volume name 'all'
bugzilla at redhat.com
- [Bugs] [Bug 1366648] [GSS] A hot tier brick becomes full, causing the entire volume to have issues and returns stale file handle and input/output error.
  bugzilla at redhat.com
- [Bugs] [Bug 1366648] [GSS] A hot tier brick becomes full, causing the entire volume to have issues and returns stale file handle and input/output error.
  bugzilla at redhat.com
- [Bugs] [Bug 1366648] [GSS] A hot tier brick becomes full, causing the entire volume to have issues and returns stale file handle and input/output error.
bugzilla at redhat.com
- [Bugs] [Bug 1399154] After ganesha node reboot/shutdown, portblock process goes to FAILED state
bugzilla at redhat.com
- [Bugs] [Bug 1395648] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1395649] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1399995] Dump volume specific options in get-state output in a more parseable manner
bugzilla at redhat.com
- [Bugs] [Bug 1395649] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1399995] Dump volume specific options in get-state output in a more parseable manner
bugzilla at redhat.com
- [Bugs] [Bug 1399995] Dump volume specific options in get-state output in a more parseable manner
bugzilla at redhat.com
- [Bugs] [Bug 1400545] New: Dump volume specific options in get-state output in a more parseable manner
bugzilla at redhat.com
- [Bugs] [Bug 1395652] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1399154] After ganesha node reboot/shutdown, portblock process goes to FAILED state
bugzilla at redhat.com
- [Bugs] [Bug 1400546] New: After ganesha node reboot/shutdown, portblock process goes to FAILED state
bugzilla at redhat.com
- [Bugs] [Bug 1395652] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1400545] Dump volume specific options in get-state output in a more parseable manner
bugzilla at redhat.com
- [Bugs] [Bug 1400546] After ganesha node reboot/shutdown, portblock process goes to FAILED state
bugzilla at redhat.com
- [Bugs] [Bug 1400545] Dump volume specific options in get-state output in a more parseable manner
bugzilla at redhat.com
- [Bugs] [Bug 1400237] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1400572] New: Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1400572] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1400237] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1400572] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1400573] New: Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1397257] capture volume tunables in get-state dump
bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1397795] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
  bugzilla at redhat.com
- [Bugs] [Bug 1400572] Ganesha services are not stopped when pacemaker quorum is lost
  bugzilla at redhat.com
- [Bugs] [Bug 1400573] Ganesha services are not stopped when pacemaker quorum is lost
  bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1399031] build: add systemd dependency to glusterfs sub-package
bugzilla at redhat.com
- [Bugs] [Bug 1400237] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1400237] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1400572] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1400572] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1400573] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1400613] New: [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
  bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1397052] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1377062] /var/tmp/rpm-tmp.KPCugR: line 2: /bin/systemctl: No such file or directory
bugzilla at redhat.com
- [Bugs] [Bug 1399031] build: add systemd dependency to glusterfs sub-package
bugzilla at redhat.com
- [Bugs] [Bug 1400635] New: build: add systemd dependency to glusterfs sub-package
bugzilla at redhat.com
- [Bugs] [Bug 1400635] build: add systemd dependency to glusterfs sub-package
bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1400635] build: add systemd dependency to glusterfs sub-package
bugzilla at redhat.com
- [Bugs] [Bug 1400635] build: add systemd dependency to glusterfs sub-package
bugzilla at redhat.com
- [Bugs] [Bug 1400546] After ganesha node reboot/shutdown, portblock process goes to FAILED state
bugzilla at redhat.com
- [Bugs] [Bug 1399186] [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
bugzilla at redhat.com
- [Bugs] [Bug 1395649] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1395652] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1366648] [GSS] A hot tier brick becomes full, causing the entire volume to have issues and returns stale file handle and input/output error.
  bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1394482] A hot tier brick becomes full, causing the entire volume to have issues and returns stale file handle and input/output error.
  bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1394482] A hot tier brick becomes full, causing the entire volume to have issues and returns stale file handle and input/output error.
  bugzilla at redhat.com
- [Bugs] [Bug 1394482] A hot tier brick becomes full, causing the entire volume to have issues and returns stale file handle and input/output error.
bugzilla at redhat.com
- [Bugs] [Bug 1400635] build: add systemd dependency to glusterfs sub-package
bugzilla at redhat.com
- [Bugs] [Bug 1377062] /var/tmp/rpm-tmp.KPCugR: line 2: /bin/systemctl: No such file or directory
bugzilla at redhat.com
- [Bugs] [Bug 1400545] Dump volume specific options in get-state output in a more parseable manner
bugzilla at redhat.com
- [Bugs] [Bug 1400545] Dump volume specific options in get-state output in a more parseable manner
bugzilla at redhat.com
- [Bugs] [Bug 1399450] Backport few of the md-cache enhancements from master to 3.9
bugzilla at redhat.com
- [Bugs] [Bug 1399450] Backport few of the md-cache enhancements from master to 3.9
bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1397419] glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1397419] glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1397419] glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1400802] New: glusterfs_ctx_defaults_init is re-initializing ctx->locks
  bugzilla at redhat.com
- [Bugs] [Bug 1397419] glusterfs_ctx_defaults_init is re-initializing ctx->locks
  bugzilla at redhat.com
- [Bugs] [Bug 1400803] New: glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1400802] glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1400803] glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1400802] glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1400803] glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1392646] Client crash
bugzilla at redhat.com
- [Bugs] [Bug 1396328] after write some files into the gluster DFS volume successful, when we read one of these files will warn cannot get the file stat, but other file can read
bugzilla at redhat.com
- [Bugs] [Bug 1388499] GlusterFS - Server halts update process
bugzilla at redhat.com
- [Bugs] [Bug 1398381] Hole punch does not report correct size
bugzilla at redhat.com
- [Bugs] [Bug 1398829] gluster can't heal the gfid mismatch file, suggest to heal with the new one.
bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1400802] glusterfs_ctx_defaults_init is re-initializing ctx->locks
  bugzilla at redhat.com
- [Bugs] [Bug 1400803] glusterfs_ctx_defaults_init is re-initializing ctx->locks
  bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1400818] New: possible memory leak on client when writing to a file while another client issues a truncate
  bugzilla at redhat.com
- [Bugs] [Bug 1400818] possible memory leak on client when writing to a file while another client issues a truncate
  bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1400818] possible memory leak on client when writing to a file while another client issues a truncate
  bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1400818] possible memory leak on client when writing to a file while another client issues a truncate
bugzilla at redhat.com
- [Bugs] [Bug 1400818] possible memory leak on client when writing to a file while another client issues a truncate
bugzilla at redhat.com
- [Bugs] [Bug 1400833] New: possible memory leak on client when writing to a file while another client issues a truncate
bugzilla at redhat.com
- [Bugs] [Bug 1400833] possible memory leak on client when writing to a file while another client issues a truncate
bugzilla at redhat.com
- [Bugs] [Bug 1400818] possible memory leak on client when writing to a file while another client issues a truncate
bugzilla at redhat.com
- [Bugs] [Bug 1400833] possible memory leak on client when writing to a file while another client issues a truncate
bugzilla at redhat.com
- [Bugs] [Bug 1400833] possible memory leak on client when writing to a file while another client issues a truncate
bugzilla at redhat.com
- [Bugs] [Bug 1400818] possible memory leak on client when writing to a file while another client issues a truncate
bugzilla at redhat.com
- [Bugs] [Bug 1399092] [geo-rep]: Worker crashes seen while renaming directories in loop
bugzilla at redhat.com
- [Bugs] [Bug 1399470] Wrong value in Last Synced column during Hybrid Crawl
bugzilla at redhat.com
- [Bugs] [Bug 1399090] [geo-rep]: Worker crashes seen while renaming directories in loop
bugzilla at redhat.com
- [Bugs] [Bug 1399092] [geo-rep]: Worker crashes seen while renaming directories in loop
bugzilla at redhat.com
- [Bugs] [Bug 1399470] Wrong value in Last Synced column during Hybrid Crawl
bugzilla at redhat.com
- [Bugs] [Bug 1400845] New: JSON output for all Events CLI commands
bugzilla at redhat.com
- [Bugs] [Bug 1400845] JSON output for all Events CLI commands
bugzilla at redhat.com
- [Bugs] [Bug 1393678] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1399592] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1400573] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1399592] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1395660] Checkpoint completed event missing master node detail
bugzilla at redhat.com
- [Bugs] [Bug 1394482] A hot tier brick becomes full, causing the entire volume to have issues and returns stale file handle and input/output error.
bugzilla at redhat.com
- [Bugs] [Bug 1395660] Checkpoint completed event missing master node detail
bugzilla at redhat.com
- [Bugs] [Bug 1395660] Checkpoint completed event missing master node detail
bugzilla at redhat.com
- [Bugs] [Bug 1400923] New: Checkpoint completed event missing master node detail
bugzilla at redhat.com
- [Bugs] [Bug 1400923] Checkpoint completed event missing master node detail
bugzilla at redhat.com
- [Bugs] [Bug 1400923] Checkpoint completed event missing master node detail
bugzilla at redhat.com
- [Bugs] [Bug 1400924] New: [RFE] Rsync flags for performance improvements
bugzilla at redhat.com
- [Bugs] [Bug 1400924] [RFE] Rsync flags for performance improvements
bugzilla at redhat.com
- [Bugs] [Bug 1400924] [RFE] Rsync flags for performance improvements
bugzilla at redhat.com
- [Bugs] [Bug 1400924] [RFE] Rsync flags for performance improvements
bugzilla at redhat.com
- [Bugs] [Bug 1399592] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1399592] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1400926] New: Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1400926] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1400926] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1399592] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1400926] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1400927] New: Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1400927] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1400927] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1392713] inconsistent file permissions b/w write permission and sticky bits (---------T) displayed when IOs are going on with md-cache enabled (and within the invalidation cycle)
  bugzilla at redhat.com
- [Bugs] [Bug 1392713] inconsistent file permissions b/w write permission and sticky bits (---------T) displayed when IOs are going on with md-cache enabled (and within the invalidation cycle)
bugzilla at redhat.com
- [Bugs] [Bug 892808] [FEAT] Bring subdirectory mount option with native client
bugzilla at redhat.com
- [Bugs] [Bug 1378842] [RFE] 'gluster volume get' should implement the way to retrieve volume options using the volume name 'all'
bugzilla at redhat.com
- [Bugs] [Bug 892808] [FEAT] Bring subdirectory mount option with native client
bugzilla at redhat.com
- [Bugs] [Bug 1398381] Hole punch does not report correct size
bugzilla at redhat.com
- [Bugs] [Bug 1397052] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1397052] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1399186] [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
  bugzilla at redhat.com
- [Bugs] [Bug 1399186] [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
  bugzilla at redhat.com
- [Bugs] [Bug 1401011] New: [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
  bugzilla at redhat.com
- [Bugs] [Bug 1401011] [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
  bugzilla at redhat.com
- [Bugs] [Bug 1401011] [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
  bugzilla at redhat.com
- [Bugs] [Bug 1399186] [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
  bugzilla at redhat.com
- [Bugs] [Bug 1401011] [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
  bugzilla at redhat.com
- [Bugs] [Bug 1401016] New: [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
bugzilla at redhat.com
- [Bugs] [Bug 1397052] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401021] New: OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1397052] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401021] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401023] New: OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401023] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401023] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1397052] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401021] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401023] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401029] New: OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401029] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401029] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1397052] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401021] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401023] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401029] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401032] New: OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401032] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401032] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1398554] Rename is failing with ENOENT while remove-brick start operation is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1400818] possible memory leak on client when writing to a file while another client issues a truncate
bugzilla at redhat.com
- [Bugs] [Bug 1381970] GlusterFS Daemon stops working after a longer runtime and higher file workload due to design flaws?
  bugzilla at redhat.com
- [Bugs] [Bug 1381970] GlusterFS Daemon stops working after a longer runtime and higher file workload due to design flaws?
  bugzilla at redhat.com
- [Bugs] [Bug 1381970] GlusterFS Daemon stops working after a longer runtime and higher file workload due to design flaws?
  bugzilla at redhat.com
- [Bugs] [Bug 1381970] GlusterFS Daemon stops working after a longer runtime and higher file workload due to design flaws?
  bugzilla at redhat.com
- [Bugs] [Bug 1381970] GlusterFS Daemon stops working after a longer runtime and higher file workload due to design flaws?
  bugzilla at redhat.com
- [Bugs] [Bug 1381970] GlusterFS Daemon stops working after a longer runtime and higher file workload due to design flaws?
  bugzilla at redhat.com
- [Bugs] [Bug 1386626] fuse mount point not accessible
  bugzilla at redhat.com
- [Bugs] [Bug 1381970] GlusterFS Daemon stops working after a longer runtime and higher file workload due to design flaws?
  bugzilla at redhat.com
- [Bugs] [Bug 1381970] GlusterFS Daemon stops working after a longer runtime and higher file workload due to design flaws?
bugzilla at redhat.com
- [Bugs] [Bug 1401095] New: log the error when locking the brick directory fails
bugzilla at redhat.com
- [Bugs] [Bug 1401095] log the error when locking the brick directory fails
bugzilla at redhat.com
- [Bugs] [Bug 1401095] log the error when locking the brick directory fails
bugzilla at redhat.com
- [Bugs] [Bug 1401095] log the error when locking the brick directory fails
bugzilla at redhat.com
- [Bugs] [Bug 1395745] bitrot quarantine dir misspelled
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1401122] New: atime becomes zero when truncating file via ganesha (or gluster-NFS)
bugzilla at redhat.com
- [Bugs] [Bug 1368138] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1401218] New: Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1401218] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1401218] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1401218] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1401261] New: Delayed Events if any one Webhook is slow
bugzilla at redhat.com
- [Bugs] [Bug 1401261] Delayed Events if any one Webhook is slow
bugzilla at redhat.com
- [Bugs] [Bug 1401218] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1400926] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1400927] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1401218] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1400802] glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1400927] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1400926] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
  bugzilla at redhat.com
- [Bugs] [Bug 1392713] inconsistent file permissions b/w write permission and sticky bits (---------T) displayed when IOs are going on with md-cache enabled (and within the invalidation cycle)
  bugzilla at redhat.com
- [Bugs] [Bug 1401376] New: inconsistent file permissions b/w write permission and sticky bits (---------T) displayed when IOs are going on with md-cache enabled (and within the invalidation cycle)
  bugzilla at redhat.com
- [Bugs] [Bug 1401376] inconsistent file permissions b/w write permission and sticky bits (---------T) displayed when IOs are going on with md-cache enabled (and within the invalidation cycle)
bugzilla at redhat.com
- [Bugs] [Bug 1401218] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1376694] Disable gluster-NFS post upgrade to >= RHGS 3.2 release
bugzilla at redhat.com
- [Bugs] [Bug 1401011] [GANESHA] Export ID changed during volume start and stop with message "lookup_export failed with Export id not found" in ganesha.log
bugzilla at redhat.com
- [Bugs] [Bug 1399635] Refresh config fails while exporting subdirectories within a volume
bugzilla at redhat.com
- [Bugs] [Bug 1401404] New: [Arbiter] IO's Halted and heal info command hung
bugzilla at redhat.com
- [Bugs] [Bug 1401404] [Arbiter] IO's Halted and heal info command hung
bugzilla at redhat.com
- [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
bugzilla at redhat.com
- [Bugs] [Bug 1397177] memory leak when using libgfapi
bugzilla at redhat.com
- [Bugs] [Bug 1401404] [Arbiter] IO's Halted and heal info command hung
bugzilla at redhat.com
- [Bugs] [Bug 1397177] memory leak when using libgfapi
bugzilla at redhat.com
- [Bugs] [Bug 1401404] [Arbiter] IO's Halted and heal info command hung
bugzilla at redhat.com
- [Bugs] [Bug 1386626] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1386626] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1376695] Provide a prompt when enabling gluster-NFS
bugzilla at redhat.com
- [Bugs] [Bug 1389422] SMB[md-cache Private Build]: Error messages in brick logs related to upcall_cache_invalidate gf_uuid_is_null
bugzilla at redhat.com
- [Bugs] [Bug 1390843] write-behind: flush stuck by former failed write
bugzilla at redhat.com
- [Bugs] [Bug 1389422] SMB[md-cache Private Build]: Error messages in brick logs related to upcall_cache_invalidate gf_uuid_is_null
bugzilla at redhat.com
- [Bugs] [Bug 1369364] Huge memory usage of FUSE client
bugzilla at redhat.com
- [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
bugzilla at redhat.com
- [Bugs] [Bug 1397406] glfs_fini does not send parent down on inactive graphs.
bugzilla at redhat.com
- [Bugs] [Bug 1397177] memory leak when using libgfapi
bugzilla at redhat.com
- [Bugs] [Bug 1397177] memory leak when using libgfapi
bugzilla at redhat.com
- [Bugs] [Bug 1397406] glfs_fini does not send parent down on inactive graphs.
bugzilla at redhat.com
- [Bugs] [Bug 1397795] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1386626] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1386626] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1388323] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1401534] New: fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1401534] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1388323] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1401534] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1388323] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1386626] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1397406] glfs_fini does not send parent down on inactive graphs.
bugzilla at redhat.com
- [Bugs] [Bug 1397177] memory leak when using libgfapi
bugzilla at redhat.com
- [Bugs] [Bug 1395745] bitrot quarantine dir misspelled
bugzilla at redhat.com
- [Bugs] [Bug 1401571] New: bitrot quarantine dir misspelled
bugzilla at redhat.com
- [Bugs] [Bug 1401571] bitrot quarantine dir misspelled
bugzilla at redhat.com
- [Bugs] [Bug 1401571] bitrot quarantine dir misspelled
bugzilla at redhat.com
- [Bugs] [Bug 1401571] bitrot quarantine dir misspelled
bugzilla at redhat.com
- [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
bugzilla at redhat.com
- [Bugs] [Bug 1401218] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1401218] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1401597] New: AFR fix locking bug in self-heal code path
bugzilla at redhat.com
- [Bugs] [Bug 1401597] AFR fix locking bug in self-heal code path
bugzilla at redhat.com
- [Bugs] [Bug 1399134] GlusterFS client crashes during remove-brick operation
bugzilla at redhat.com
- [Bugs] [Bug 1399134] GlusterFS client crashes during remove-brick operation
bugzilla at redhat.com
- [Bugs] [Bug 1396328] after write some files into the gluster DFS volume successful, when we read one of these files will warn cannot get the file stat, but other file can read
bugzilla at redhat.com
- [Bugs] [Bug 1388509] gluster volume heal info "healed" and "heal-failed" showing wrong information
bugzilla at redhat.com
- [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
bugzilla at redhat.com
- [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool
bugzilla at redhat.com
- [Bugs] [Bug 1401122] atime becomes zero when truncating file via ganesha (or gluster-NFS)
bugzilla at redhat.com
- [Bugs] [Bug 1397085] GlusterFS does not work well with MS Office 2010 and Samba "posix locking = yes".
bugzilla at redhat.com
- [Bugs] [Bug 1401122] atime becomes zero when truncating file via ganesha (or gluster-NFS)
bugzilla at redhat.com
- [Bugs] [Bug 1401122] atime becomes zero when truncating file via ganesha (or gluster-NFS)
bugzilla at redhat.com
- [Bugs] [Bug 1401777] New: atime becomes zero when truncating file via ganesha (or gluster-NFS)
bugzilla at redhat.com
- [Bugs] [Bug 1389742] build: incorrect Requires: for portblock resource agent
bugzilla at redhat.com
- [Bugs] [Bug 1350744] GlusterFS 3.9.0 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1401777] atime becomes zero when truncating file via ganesha (or gluster-NFS)
bugzilla at redhat.com
- [Bugs] [Bug 1395649] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1395652] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1389742] build: incorrect Requires: for portblock resource agent
bugzilla at redhat.com
- [Bugs] [Bug 1336853] scripts: bash-isms in scripts
bugzilla at redhat.com
- [Bugs] [Bug 1336854] scripts: bash-isms in scripts
bugzilla at redhat.com
- [Bugs] [Bug 1374278] rpc/xdr: generated files are filtered with a sed extended regex
bugzilla at redhat.com
- [Bugs] [Bug 1336197] failover is not working with latest builds.
bugzilla at redhat.com
- [Bugs] [Bug 1336198] failover is not working with latest builds.
bugzilla at redhat.com
- [Bugs] [Bug 1336199] failover is not working with latest builds.
bugzilla at redhat.com
- [Bugs] [Bug 1356998] syscalls: readdir_r() is deprecated in newer glibc
bugzilla at redhat.com
- [Bugs] [Bug 1337650] log flooded with Could not map name=xxxx to a UUID when config'd with long hostnames
bugzilla at redhat.com
- [Bugs] [Bug 1337652] log flooded with Could not map name=xxxx to a UUID when config'd with long hostnames
bugzilla at redhat.com
- [Bugs] [Bug 1337653] log flooded with Could not map name=xxxx to a UUID when config'd with long hostnames
bugzilla at redhat.com
- [Bugs] [Bug 1350793] build: remove absolute paths from glusterfs spec file
bugzilla at redhat.com
- [Bugs] [Bug 1351711] build: remove absolute paths from glusterfs spec file
bugzilla at redhat.com
- [Bugs] [Bug 1341768] After setting up ganesha on RHEL 6, nodes remains in stopped state and grace related failures observed in pcs status
bugzilla at redhat.com
- [Bugs] [Bug 1341770] After setting up ganesha on RHEL 6, nodes remains in stopped state and grace related failures observed in pcs status
bugzilla at redhat.com
- [Bugs] [Bug 1341772] After setting up ganesha on RHEL 6, nodes remains in stopped state and grace related failures observed in pcs status
bugzilla at redhat.com
- [Bugs] [Bug 1341294] build: RHEL7 unpackaged files /var/lib/glusterd/hooks/.../S57glusterfind-delete-post.{pyc,pyo}
bugzilla at redhat.com
- [Bugs] [Bug 1341295] build: RHEL7 unpackaged files /var/lib/glusterd/hooks/.../S57glusterfind-delete-post.{pyc,pyo}
bugzilla at redhat.com
- [Bugs] [Bug 1341296] build: RHEL7 unpackaged files /var/lib/glusterd/hooks/.../S57glusterfind-delete-post.{pyc,pyo}
bugzilla at redhat.com
- [Bugs] [Bug 1333925] libglusterfs: race conditions and illegal mem access in timer
bugzilla at redhat.com
- [Bugs] [Bug 1342620] libglusterfs: race conditions and illegal mem access in timer
bugzilla at redhat.com
- [Bugs] [Bug 1336945] [NFS-Ganesha] : stonith-enabled option not set with new versions of cman, pacemaker, corosync and pcs
bugzilla at redhat.com
- [Bugs] [Bug 1336947] [NFS-Ganesha] : stonith-enabled option not set with new versions of cman, pacemaker, corosync and pcs
bugzilla at redhat.com
- [Bugs] [Bug 1336948] [NFS-Ganesha] : stonith-enabled option not set with new versions of cman, pacemaker, corosync and pcs
bugzilla at redhat.com
- [Bugs] [Bug 1373529] Node remains in stopped state in pcs status with "/usr/lib/ocf/resource.d/heartbeat/ganesha_mon: line 137: [: too many arguments]" messages in logs.
bugzilla at redhat.com
- [Bugs] [Bug 1338967] common-ha: ganesha.nfsd not put into NFS-GRACE after fail-back
bugzilla at redhat.com
- [Bugs] [Bug 1338968] common-ha: ganesha.nfsd not put into NFS-GRACE after fail-back
bugzilla at redhat.com
- [Bugs] [Bug 1338969] common-ha: ganesha.nfsd not put into NFS-GRACE after fail-back
bugzilla at redhat.com
- [Bugs] [Bug 1388579] crypt: changes needed for openssl-1.1 (coming in Fedora 26)
bugzilla at redhat.com
- [Bugs] [Bug 1350744] GlusterFS 3.9.0 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1388580] crypt: changes needed for openssl-1.1 (coming in Fedora 26)
bugzilla at redhat.com
- [Bugs] [Bug 1395652] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1395652] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1400572] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1401777] atime becomes zero when truncating file via ganesha ( or gluster-NFS)
bugzilla at redhat.com
- [Bugs] [Bug 1336793] assorted typos and spelling mistakes from Debian lintian
bugzilla at redhat.com
- [Bugs] [Bug 1215420] Spelling errors again
bugzilla at redhat.com
- [Bugs] [Bug 1336794] assorted typos and spelling mistakes from Debian lintian
bugzilla at redhat.com
- [Bugs] [Bug 1347354] glusterd: SuSE build system error for incorrect strcat, strncat usage
bugzilla at redhat.com
- [Bugs] [Bug 1347355] glusterd: SuSE build system error for incorrect strcat, strncat usage
bugzilla at redhat.com
- [Bugs] [Bug 1395649] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1395652] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1383913] spurious heal info as pending heal entries never end on an EC volume while IOs are going on
bugzilla at redhat.com
- [Bugs] [Bug 1385451] "nfs.disable: on" is not showing in Vol info by default for the 3.7.x volumes after updating to 3.9.0
bugzilla at redhat.com
- [Bugs] [Bug 1350744] GlusterFS 3.9.0 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1374639] glusterfs: create a directory with 0464 mode return EIO error
bugzilla at redhat.com
- [Bugs] [Bug 1374579] Geo-rep worker Faulty with OSError: [Errno 21] Is a directory
bugzilla at redhat.com
- [Bugs] [Bug 1374581] Geo-rep worker Faulty with OSError: [Errno 21] Is a directory
bugzilla at redhat.com
- [Bugs] [Bug 1388563] [Eventing]: 'VOLUME_REBALANCE' event messages have an incorrect volume name
bugzilla at redhat.com
- [Bugs] [Bug 1364529] api: revert glfs_ipc_xd intended for 4.0
bugzilla at redhat.com
- [Bugs] [Bug 1350744] GlusterFS 3.9.0 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1370931] glfs_realpath() should not return malloc()'d allocated memory
bugzilla at redhat.com
- [Bugs] [Bug 1374153] [RFE] History Crawl performance improvement
bugzilla at redhat.com
- [Bugs] [Bug 1364421] [RFE] History Crawl performance improvement
bugzilla at redhat.com
- [Bugs] [Bug 1365119] [RFE] History Crawl performance improvement
bugzilla at redhat.com
- [Bugs] [Bug 1350744] GlusterFS 3.9.0 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1387894] Regression caused by enabling client-io-threads by default
bugzilla at redhat.com
- [Bugs] [Bug 1359613] [RFE] Geo-replication Logging Improvements
bugzilla at redhat.com
- [Bugs] [Bug 1387990] [RFE] Geo-replication Logging Improvements
bugzilla at redhat.com
- [Bugs] [Bug 1388731] [GSS] glusterfind pre session hangs indefinitely in RHGS 3.1.3
bugzilla at redhat.com
- [Bugs] [Bug 1376477] [RFE] DHT Events
bugzilla at redhat.com
- [Bugs] [Bug 1387564] [Eventing]: UUID is showing zeros in the event message for the peer probe operation.
bugzilla at redhat.com
- [Bugs] [Bug 1379028] Modifications to AFR Events
bugzilla at redhat.com
- [Bugs] [Bug 1378300] Modifications to AFR Events
bugzilla at redhat.com
- [Bugs] [Bug 1377386] glusterd experiencing repeated connect/disconnect messages when shd is down
bugzilla at redhat.com
- [Bugs] [Bug 1373723] glusterd experiencing repeated connect/disconnect messages when shd is down
bugzilla at redhat.com
- [Bugs] [Bug 1378130] glusterd experiencing repeated connect/disconnect messages when shd is down
bugzilla at redhat.com
- [Bugs] [Bug 1378814] Files not being opened with o_direct flag during random read operation (Glusterfs 3.8.2)
bugzilla at redhat.com
- [Bugs] [Bug 1377556] Files not being opened with o_direct flag during random read operation (Glusterfs 3.8.2)
bugzilla at redhat.com
- [Bugs] [Bug 1380638] Files not being opened with o_direct flag during random read operation (Glusterfs 3.8.2)
bugzilla at redhat.com
- [Bugs] [Bug 1378695] Files not being opened with o_direct flag during random read operation (Glusterfs 3.8.2)
bugzilla at redhat.com
- [Bugs] [Bug 1386072] Spurious permission denied problems observed
bugzilla at redhat.com
- [Bugs] [Bug 1373740] [RFE]: events from protocol server
bugzilla at redhat.com
- [Bugs] [Bug 1373743] [RFE]: AFR events
bugzilla at redhat.com
- [Bugs] [Bug 1374324] [RFE] Tier Events
bugzilla at redhat.com
- [Bugs] [Bug 1387975] Continuous warning messages getting when one of the cluster node is down on SSL setup.
bugzilla at redhat.com
- [Bugs] [Bug 1386450] Continuous warning messages getting when one of the cluster node is down on SSL setup.
bugzilla at redhat.com
- [Bugs] [Bug 1374597] [geo-rep]: AttributeError: 'Popen' object has no attribute 'elines'
bugzilla at redhat.com
- [Bugs] [Bug 1379287] warning messages seen in glusterd logs for each 'gluster volume status' command
bugzilla at redhat.com
- [Bugs] [Bug 1379284] warning messages seen in glusterd logs for each 'gluster volume status' command
bugzilla at redhat.com
- [Bugs] [Bug 1374630] [geo-replication]: geo-rep Status is not showing bricks from one of the nodes
bugzilla at redhat.com
- [Bugs] [Bug 1372686] [RFE]Reducing number of network round trips
bugzilla at redhat.com
- [Bugs] [Bug 1360978] [RFE]Reducing number of network round trips
bugzilla at redhat.com
- [Bugs] [Bug 1388150] geo-replica slave node goes faulty for non-root user session due to fail to locate gluster binary
bugzilla at redhat.com
- [Bugs] [Bug 1377288] The GlusterFS Callback RPC-calls always use RPC/XID 42
bugzilla at redhat.com
- [Bugs] [Bug 1374626] Worker crashes with EINVAL errors
bugzilla at redhat.com
- [Bugs] [Bug 1387964] [Eventing]: 'gluster vol bitrot <volname> scrub ondemand' does not produce an event
bugzilla at redhat.com
- [Bugs] [Bug 1386338] pmap_signin event fails to update brickinfo->signed_in flag
bugzilla at redhat.com
- [Bugs] [Bug 1386538] pmap_signin event fails to update brickinfo->signed_in flag
bugzilla at redhat.com
- [Bugs] [Bug 1376874] RFE : move ganesha related configuration into shared storage
bugzilla at redhat.com
- [Bugs] [Bug 1376396] /var/tmp/rpm-tmp.KPCugR: line 2: /bin/systemctl: No such file or directory
bugzilla at redhat.com
- [Bugs] [Bug 1372278] [RFE] Provide snapshot events for the new eventing framework
bugzilla at redhat.com
- [Bugs] [Bug 1375042] bug-963541.t spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1375045] bug-963541.t spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1375570] Detach tier commit is allowed when detach tier start goes into failed state
bugzilla at redhat.com
- [Bugs] [Bug 1374298] "gluster vol status all clients --xml" doesn't generate xml if there is a failure in between
bugzilla at redhat.com
- [Bugs] [Bug 1374608] geo-replication *changes.log does not respect the log-level configured
bugzilla at redhat.com
- [Bugs] [Bug 1375914] posix: Integrate important events with events framework
bugzilla at redhat.com
- [Bugs] [Bug 1375125] arbiter volume write performance is bad.
bugzilla at redhat.com
- [Bugs] [Bug 1385224] arbiter volume write performance is bad with sharding
bugzilla at redhat.com
- [Bugs] [Bug 1375543] [geo-rep]: defunct tar process while using tar+ssh sync
bugzilla at redhat.com
- [Bugs] [Bug 1385236] invalid argument warning messages seen in fuse client logs 2016-09-30 06:34:58.938667] W [dict.c:418:dict_set] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x58722) 0-dict: !this || !value for key=link-count [Invalid argument]
bugzilla at redhat.com
- [Bugs] [Bug 1385442] invalid argument warning messages seen in fuse client logs 2016-09-30 06:34:58.938667] W [dict.c:418:dict_set] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x58722) 0-dict: !this || !value for key=link-count [Invalid argument]
bugzilla at redhat.com
- [Bugs] [Bug 1387492] Error and warning message getting while removing glusterfs-events package
bugzilla at redhat.com
- [Bugs] [Bug 1386160] Error and warning message getting while removing glusterfs-events package
bugzilla at redhat.com
- [Bugs] [Bug 1387502] Incorrect volume type in the "glusterd_state" file generated using CLI "gluster get-state"
bugzilla at redhat.com
- [Bugs] [Bug 1353427] [RFE] CLI to get local state representation for a cluster
bugzilla at redhat.com
- [Bugs] [Bug 1387984] Add a test script for compound fops changes in AFR
bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1397257] capture volume tunables in get-state dump
bugzilla at redhat.com
- [Bugs] [Bug 1377062] /var/tmp/rpm-tmp.KPCugR: line 2: /bin/systemctl: No such file or directory
bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1397257] capture volume tunables in get-state dump
bugzilla at redhat.com
- [Bugs] [Bug 1377062] /var/tmp/rpm-tmp.KPCugR: line 2: /bin/systemctl: No such file or directory
bugzilla at redhat.com
- [Bugs] [Bug 1401777] atime becomes zero when truncating file via ganesha (or gluster-NFS)
bugzilla at redhat.com
- [Bugs] [Bug 1401777] atime becomes zero when truncating file via ganesha (or gluster-NFS)
bugzilla at redhat.com
- [Bugs] [Bug 1401800] high CPU consumption by glusterfs process
bugzilla at redhat.com
- [Bugs] [Bug 1401801] New: [RFE] Use Host UUID to find local nodes to spawn workers
bugzilla at redhat.com
- [Bugs] [Bug 1401801] [RFE] Use Host UUID to find local nodes to spawn workers
bugzilla at redhat.com
- [Bugs] [Bug 1401801] [RFE] Use Host UUID to find local nodes to spawn workers
bugzilla at redhat.com
- [Bugs] [Bug 1401801] [RFE] Use Host UUID to find local nodes to spawn workers
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in /var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1169302] Unable to take Statedump for gfapi applications
bugzilla at redhat.com
- [Bugs] [Bug 1379673] Creation of file hangs while doing ls from another mount.
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in /var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1401812] New: RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in /var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1401822] New: [GANESHA] Unable to export the ganesha volume after doing volume start and stop
bugzilla at redhat.com
- [Bugs] [Bug 1401822] [GANESHA] Unable to export the ganesha volume after doing volume start and stop
bugzilla at redhat.com
- [Bugs] [Bug 1401822] [GANESHA] Unable to export the ganesha volume after doing volume start and stop
bugzilla at redhat.com
- [Bugs] [Bug 1401836] New: update documentation to readthedocs.io
bugzilla at redhat.com
- [Bugs] [Bug 1401836] update documentation to readthedocs.io
bugzilla at redhat.com
- [Bugs] [Bug 1401836] update documentation to readthedocs.io
bugzilla at redhat.com
- [Bugs] [Bug 1401877] New: [GANESHA] Symlinks from /etc/ganesha/ganesha.conf to shared_storage are created on the non-ganesha nodes in 8 node gluster having 4 node ganesha cluster
bugzilla at redhat.com
- [Bugs] [Bug 1313838] Tiering as separate process and in v status moving tier task to tier process
bugzilla at redhat.com
- [Bugs] [Bug 1401877] [GANESHA] Symlinks from /etc/ganesha/ganesha.conf to shared_storage are created on the non-ganesha nodes in 8 node gluster having 4 node ganesha cluster
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1401261] Delayed Events if any one Webhook is slow
bugzilla at redhat.com
- [Bugs] [Bug 1401218] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1401218] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1401921] New: glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1399584] Memory leaks after glfs_new+set_volfile_server+init+fini
bugzilla at redhat.com
- [Bugs] [Bug 1401921] glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1401800] high CPU consumption by glusterfs process
bugzilla at redhat.com
- [Bugs] [Bug 1401800] high CPU consumption by glusterfs process
bugzilla at redhat.com
- [Bugs] [Bug 1401921] glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1369077] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1369077] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1401011] [GANESHA] Reexporting a volume during volume start and stop ends up with different export id in one node
bugzilla at redhat.com
- [Bugs] [Bug 1402172] New: Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1401023] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1401218] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1402212] New: Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1401921] glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1375849] [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1402215] New: [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1402215] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1402215] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1375431] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1375849] [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1402215] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1402216] New: [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1402216] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1388509] gluster volume heal info "healed" and "heal-failed" showing wrong information
bugzilla at redhat.com
- [Bugs] [Bug 1397795] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1198849] Minor improvements and cleanup for the build system
bugzilla at redhat.com
- [Bugs] [Bug 1402237] New: Bad spacing in error message in cli
bugzilla at redhat.com
- [Bugs] [Bug 1402237] Bad spacing in error message in cli
bugzilla at redhat.com
- [Bugs] [Bug 1402237] Bad spacing in error message in cli
bugzilla at redhat.com
- [Bugs] [Bug 1369077] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1402254] New: compile warning unused variable
bugzilla at redhat.com
- [Bugs] [Bug 1402254] compile warning unused variable
bugzilla at redhat.com
- [Bugs] [Bug 1402254] compile warning unused variable
bugzilla at redhat.com
- [Bugs] [Bug 1402261] New: cli: compile warnings (unused var) if building without bd xlator
bugzilla at redhat.com
- [Bugs] [Bug 1402261] cli: compile warnings (unused var) if building without bd xlator
bugzilla at redhat.com
- [Bugs] [Bug 1402261] cli: compile warnings (unused var) if building without bd xlator
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1402237] Bad spacing in error message in cli
bugzilla at redhat.com
- [Bugs] [Bug 1393694] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1402296] New: Enabling Quota on volumes leads inconsistent data
bugzilla at redhat.com
- [Bugs] [Bug 1402297] New: Enabling Quota on volumes leads inconsistent data
bugzilla at redhat.com
- [Bugs] [Bug 1401877] [GANESHA] Symlinks from /etc/ganesha/ganesha.conf to shared_storage are created on the non-ganesha nodes in 8 node gluster having 4 node ganesha cluster
bugzilla at redhat.com
- [Bugs] [Bug 1401404] [Arbiter] IO's Halted and heal info command hung
bugzilla at redhat.com
- [Bugs] [Bug 1392347] RFE for glusterfind
bugzilla at redhat.com
- [Bugs] [Bug 1397795] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1402366] New: NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1402369] New: Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402369] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402369] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1313838] Tiering as separate process and in v status moving tier task to tier process
bugzilla at redhat.com
- [Bugs] [Bug 1368138] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1368138] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1402366] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1398859] tier: performance improvement with one thread per brick
bugzilla at redhat.com
- [Bugs] [Bug 1402366] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1402406] New: Client stale file handle error in dht-linkfile.c under SPEC SFS 2014 VDA workload
bugzilla at redhat.com
- [Bugs] [Bug 1397795] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1402406] Client stale file handle error in dht-linkfile.c under SPEC SFS 2014 VDA workload
bugzilla at redhat.com
- [Bugs] [Bug 1402406] Client stale file handle error in dht-linkfile.c under SPEC SFS 2014 VDA workload
bugzilla at redhat.com
- [Bugs] [Bug 1395517] Seeing error messages [snapview-client.c:283:gf_svc_lookup_cbk] and [dht-helper.c:1666:dht_inode_ctx_time_update] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x5d75c)
bugzilla at redhat.com
- [Bugs] [Bug 1398859] tier: performance improvement with one thread per brick
bugzilla at redhat.com
- [Bugs] [Bug 1393466] Request branch creation for FB commits and addition of merge rights for specific users
bugzilla at redhat.com
- [Bugs] [Bug 1398859] tier: performance improvement with multi-threaded brick processing
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1369077] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1369077] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1393694] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1393694] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1402482] New: The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1396779] heal info --xml when bricks are down in a systemic environment is not displaying anything even after more than 30 minutes
bugzilla at redhat.com
- [Bugs] [Bug 1402482] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1402482] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1396780] Make debugging EACCES errors easier
bugzilla at redhat.com
- [Bugs] [Bug 1396780] Make debugging EACCES errors easier
bugzilla at redhat.com
- [Bugs] [Bug 1396780] Make debugging EACCES errors easier
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1402538] New: Assertion Failed Error messages in rebalance logs during rebalance
bugzilla at redhat.com
- [Bugs] [Bug 1402538] Assertion Failed Error messages in rebalance logs during rebalance
bugzilla at redhat.com
- [Bugs] [Bug 1402538] Assertion Failed Error messages in rebalance logs during rebalance
bugzilla at redhat.com
- [Bugs] [Bug 1402621] New: High load one node, gluster fuse clients hang, heal info does not complete
bugzilla at redhat.com
- [Bugs] [Bug 1402621] High load one node, gluster fuse clients hang, heal info does not complete
bugzilla at redhat.com
- [Bugs] [Bug 1402366] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and brings down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1402538] Assertion Failed Error messages in rebalance logs during rebalance
bugzilla at redhat.com
- [Bugs] [Bug 1402621] High load one node, gluster fuse clients hang, heal info does not complete
bugzilla at redhat.com
- [Bugs] [Bug 1402661] New: Samba crash when mounting a distributed dispersed volume over CIFS
bugzilla at redhat.com
- [Bugs] [Bug 1402661] Samba crash when mounting a distributed dispersed volume over CIFS
bugzilla at redhat.com
- [Bugs] [Bug 1402661] Samba crash when mounting a distributed dispersed volume over CIFS
bugzilla at redhat.com
- [Bugs] [Bug 1388509] gluster volume heal info "healed" and "heal-failed" showing wrong information
bugzilla at redhat.com
- [Bugs] [Bug 1402661] Samba crash when mounting a distributed dispersed volume over CIFS
bugzilla at redhat.com
- [Bugs] [Bug 1397257] capture volume tunables in get-state dump
bugzilla at redhat.com
- [Bugs] [Bug 1393678] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1402369] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402369] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402369] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402671] New: Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402671] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402671] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402369] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402672] New: Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402672] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402672] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402621] High load one node, gluster fuse clients hang, heal info does not complete
bugzilla at redhat.com
- [Bugs] [Bug 1393678] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1388509] gluster volume heal info "healed" and "heal-failed" showing wrong information
bugzilla at redhat.com
- [Bugs] [Bug 1402482] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1402482] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1402688] New: "gluster get-state" is capturing the port number for the stopped state brick process.
bugzilla at redhat.com
- [Bugs] [Bug 1402688] "gluster get-state" is capturing the port number for the stopped state brick process.
bugzilla at redhat.com
- [Bugs] [Bug 1402688] "gluster get-state" is capturing the port number for the stopped state brick process.
bugzilla at redhat.com
- [Bugs] [Bug 1402688] "gluster get-state" is capturing the port number for the stopped state brick process.
bugzilla at redhat.com
- [Bugs] [Bug 1402688] "gluster get-state" is capturing the port number for the stopped state brick process.
bugzilla at redhat.com
- [Bugs] [Bug 1401921] glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1401921] glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1402694] New: glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1402694] glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1402694] glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1401921] glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1402694] glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1402697] New: glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1402697] glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1402697] glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1401921] glusterfsd crashed while taking snapshot using scheduler
bugzilla at redhat.com
- [Bugs] [Bug 1398859] tier: performance improvement with multi-threaded brick processing
bugzilla at redhat.com
- [Bugs] [Bug 1402672] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402671] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402710] New: ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1402710] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1402710] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1402723] New: Log all published events
bugzilla at redhat.com
- [Bugs] [Bug 1402723] Log all published events
bugzilla at redhat.com
- [Bugs] [Bug 1393678] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1402727] New: Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1393678] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1402727] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1402728] New: Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1402727] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1402728] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1402730] New: self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1402728] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1402727] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1402727] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1402728] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1401404] [Arbiter] IO's Halted and heal info command hung
bugzilla at redhat.com
- [Bugs] [Bug 1266876] cluster/afr: AFR2 returns empty readdir results to clients if brick is added back into cluster after re-imaging/formatting
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1402828] New: Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1402828] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1402841] New: Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1402841] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1402841] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1402841] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1402688] "gluster get-state" is capturing the port number for the stopped state brick process.
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1402710] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1401822] [GANESHA] Unable to export the ganesha volume after doing volume start and stop
bugzilla at redhat.com
- [Bugs] [Bug 1402730] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1402730] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1399134] GlusterFS client crashes during remove-brick operation
bugzilla at redhat.com
- [Bugs] [Bug 1401095] log the error when locking the brick directory fails
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1399134] GlusterFS client crashes during remove-brick operation
bugzilla at redhat.com
- [Bugs] [Bug 1368138] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1363613] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1368138] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1403108] New: Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1402621] High load one node, gluster fuse clients hang, heal info does not complete
bugzilla at redhat.com
- [Bugs] [Bug 1403108] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1402621] High load one node, gluster fuse clients hang, heal info does not complete
bugzilla at redhat.com
- [Bugs] [Bug 1363613] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1368138] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1403108] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1403109] New: Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1403109] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1403109] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1402841] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1350867] RFE: FEATURE: Lock revocation for features/locks xlator
bugzilla at redhat.com
- [Bugs] [Bug 1403108] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1350867] RFE: FEATURE: Lock revocation for features/locks xlator
bugzilla at redhat.com
- [Bugs] [Bug 1403108] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1403109] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1403109] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1402841] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403120] New: Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1378547] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1386188] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1403121] New: Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1403121] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1403121] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1403124] New: Gluster Block Storage CLI Integration
bugzilla at redhat.com
- [Bugs] [Bug 1403125] New: Gluster Block Storage CLI Integration
bugzilla at redhat.com
- [Bugs] [Bug 1403124] Gluster Block Storage CLI Integration
bugzilla at redhat.com
- [Bugs] [Bug 1403125] Gluster Block Storage CLI Integration
bugzilla at redhat.com
- [Bugs] [Bug 1402661] Samba crash when mounting a distributed dispersed volume over CIFS
bugzilla at redhat.com
- [Bugs] [Bug 1403130] [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
bugzilla at redhat.com
- [Bugs] [Bug 1403130] [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
bugzilla at redhat.com
- [Bugs] [Bug 1402730] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1403130] [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
bugzilla at redhat.com
- [Bugs] [Bug 1378547] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1378547] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1401822] [GANESHA] Unable to export the ganesha volume after doing volume start and stop
bugzilla at redhat.com
- [Bugs] [Bug 1403144] New: [GANESHA] Unable to export the ganesha volume after doing volume start and stop
bugzilla at redhat.com
- [Bugs] [Bug 1403144] [GANESHA] Unable to export the ganesha volume after doing volume start and stop
bugzilla at redhat.com
- [Bugs] [Bug 1403156] New: Memory leak on graph switch
bugzilla at redhat.com
- [Bugs] [Bug 1403156] Memory leak on graph switch
bugzilla at redhat.com
- [Bugs] [Bug 1402841] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1402841] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1402828] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1402828] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1402672] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402671] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1402671] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1402841] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403187] New: Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403188] New: Snapshot: Snapshot restore fails when nfs ganesha is disabled and shared storage is down
bugzilla at redhat.com
- [Bugs] [Bug 1403188] Snapshot: Snapshot restore fails when nfs ganesha is disabled and shared storage is down
bugzilla at redhat.com
- [Bugs] [Bug 1403187] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403187] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403187] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1402841] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403187] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403192] New: Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403192] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403192] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1392347] RFE for glusterfind
bugzilla at redhat.com
- [Bugs] [Bug 1393694] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1330079] [RFE] gluster vfs plugin should be able to make use of multiple volfile server feature of gfapi
bugzilla at redhat.com
- [Bugs] [Bug 1396880] refresh-config fails and crashes ganesha when mdcache is enabled on the volume.
bugzilla at redhat.com
- [Bugs] [Bug 1047975] glusterfs/extras: add a convenience script to label (selinux) gluster bricks
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1399780] Use standard refcounting for structures where possible
bugzilla at redhat.com
- [Bugs] [Bug 1400459] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
bugzilla at redhat.com
- [Bugs] [Bug 1402037] GlusterFS - Server halts update process ... AGAIN
bugzilla at redhat.com
- [Bugs] [Bug 1401836] update documentation to readthedocs.io
bugzilla at redhat.com
- [Bugs] [Bug 1401836] update documentation to readthedocs.io
bugzilla at redhat.com
- [Bugs] [Bug 1388861] build: python on Debian-based dists use .../lib/python2.7/dist-packages instead of .../site-packages
bugzilla at redhat.com
- [Bugs] [Bug 1388861] build: python on Debian-based dists use .../lib/python2.7/dist-packages instead of .../site-packages
bugzilla at redhat.com
- [Bugs] [Bug 1403577] New: GlusterFS 3.8.8 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1346013] Introduce API support for requesting mandatory locks
bugzilla at redhat.com
- [Bugs] [Bug 1350407] Sharding may create shards beyond its size
bugzilla at redhat.com
- [Bugs] [Bug 1385592] Fix some spelling mistakes in comments and log messages
bugzilla at redhat.com
- [Bugs] [Bug 1403577] GlusterFS 3.8.8 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1385592] Fix some spelling mistakes in comments and log messages
bugzilla at redhat.com
- [Bugs] [Bug 1346013] Introduce API support for requesting mandatory locks
bugzilla at redhat.com
- [Bugs] [Bug 1350407] Sharding may create shards beyond its size
bugzilla at redhat.com
- [Bugs] [Bug 1403599] New: Samba crashes with 3.9 and VFS module
bugzilla at redhat.com
- [Bugs] [Bug 1403612] New: With NFS root-squash the other x-bit has to be set to make dirs
bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1376464] [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1403644] New: Need a CentOS slave to debug a test failure seen only with my patch (http://review.gluster.org/16046)
bugzilla at redhat.com
- [Bugs] [Bug 1397911] GlusterFS 3.8.7 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1402730] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1403646] New: self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1373498] Creation of file hangs while doing ls from another mount.
bugzilla at redhat.com
- [Bugs] [Bug 1379673] Creation of file hangs while doing ls from another mount.
bugzilla at redhat.com
- [Bugs] [Bug 1403648] New: [Perf]: ls takes a lot of time when run with creates from another mount.
bugzilla at redhat.com
- [Bugs] [Bug 1403648] [Perf]: ls takes a lot of time when run with creates from another mount.
bugzilla at redhat.com
- [Bugs] [Bug 1403648] [Perf]: ls takes a lot of time when run with creates from another mount.
bugzilla at redhat.com
- [Bugs] [Bug 1318100] RFE : SELinux translator to support setting SELinux contexts on files in a glusterfs volume
bugzilla at redhat.com
- [Bugs] [Bug 1377584] memory leak problems are found in daemon:glusterd, server: glusterfsd and client:glusterfs
bugzilla at redhat.com
- [Bugs] [Bug 1402828] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1377062] /var/tmp/rpm-tmp.KPCugR: line 2: /bin/systemctl: No such file or directory
bugzilla at redhat.com
- [Bugs] [Bug 1392347] RFE for glusterfind
bugzilla at redhat.com
- [Bugs] [Bug 1392347] RFE for glusterfind
bugzilla at redhat.com
- [Bugs] [Bug 1403187] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403187] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403192] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403192] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403644] Need a CentOS slave to debug a test failure seen only with my patch (http://review.gluster.org/16046)
bugzilla at redhat.com
- [Bugs] [Bug 1393678] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1389740] build: python on Debian-based dists use .../lib/python2.7/dist-packages instead of .../site-packages
bugzilla at redhat.com
- [Bugs] [Bug 1402538] Assertion Failed Error messages in rebalance logs during rebalance
bugzilla at redhat.com
- [Bugs] [Bug 1397911] GlusterFS 3.8.7 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1397911] GlusterFS 3.8.7 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1388509] gluster volume heal info "healed" and "heal-failed" showing wrong information
bugzilla at redhat.com
- [Bugs] [Bug 1385592] Fix some spelling mistakes in comments and log messages
bugzilla at redhat.com
- [Bugs] [Bug 1397911] GlusterFS 3.8.7 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1346013] Introduce API support for requesting mandatory locks
bugzilla at redhat.com
- [Bugs] [Bug 1397911] GlusterFS 3.8.7 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1350407] Sharding may create shards beyond its size
bugzilla at redhat.com
- [Bugs] [Bug 1397911] GlusterFS 3.8.7 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1378547] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1402730] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1393678] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1403646] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1403646] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1402730] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1403646] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1403743] New: self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1403743] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1403743] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1403780] New: Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1403648] [Perf]: ls takes a lot of time when run with creates from another mount.
bugzilla at redhat.com
- [Bugs] [Bug 1403780] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1379673] Creation of file hangs while doing ls from another mount.
bugzilla at redhat.com
- [Bugs] [Bug 1403780] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1403780] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1402828] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1402828] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1399780] Use standard refcounting for structures where possible
bugzilla at redhat.com
- [Bugs] [Bug 1202274] Minor improvements and code cleanup for libgfapi
bugzilla at redhat.com
- [Bugs] [Bug 1399780] Use standard refcounting for structures where possible
bugzilla at redhat.com
- [Bugs] [Bug 1390050] Elasticsearch get CorruptIndexException errors when running with GlusterFS persistent storage
bugzilla at redhat.com
- [Bugs] [Bug 1403889] New: DHT: If Re-balance fails while migrating the file, T bit is getting set on the file
bugzilla at redhat.com
- [Bugs] [Bug 1403889] DHT: If Re-balance fails while migrating the file, T bit is getting set on the file
bugzilla at redhat.com
- [Bugs] [Bug 1403743] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1403889] DHT: If Re-balance fails while migrating the file, T bit is getting set on the file
bugzilla at redhat.com
- [Bugs] [Bug 1403889] DHT: If Re-balance fails while migrating the file, T bit is getting set on the file
bugzilla at redhat.com
- [Bugs] [Bug 1378547] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1403743] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1402730] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1403984] New: Node node high CPU - healing entries increasing
bugzilla at redhat.com
- [Bugs] [Bug 1403984] Node node high CPU - healing entries increasing
bugzilla at redhat.com
- [Bugs] [Bug 1403130] [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
bugzilla at redhat.com
- [Bugs] [Bug 1403889] DHT: If Re-balance fails while migrating the file, T bit is getting set on the file
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1402828] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1404101] New: Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1404101] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1404101] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1404101] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1403780] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1403780] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1403780] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1403780] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1404104] New: Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1404104] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1404104] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1404104] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1403780] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1404104] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1404105] New: Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1404105] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1404105] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1404105] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1404118] New: Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1404118] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1403121] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1403121] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1404118] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1404101] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1404104] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1404105] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1402730] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1392347] RFE for glusterfind
bugzilla at redhat.com
- [Bugs] [Bug 1392347] RFE for glusterfind
bugzilla at redhat.com
- [Bugs] [Bug 1404118] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1404128] New: Unable to create Volume : already part of a volume
bugzilla at redhat.com
- [Bugs] [Bug 1385526] Tracker bug for GlusterFS-v3.7.17
bugzilla at redhat.com
- [Bugs] [Bug 1404129] New: Tracker bug for GlusterFS-v3.7.19
bugzilla at redhat.com
- [Bugs] [Bug 1403130] [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
bugzilla at redhat.com
- [Bugs] [Bug 1403130] [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
bugzilla at redhat.com
- [Bugs] [Bug 1404133] New: [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
bugzilla at redhat.com
- [Bugs] [Bug 1404133] [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
bugzilla at redhat.com
- [Bugs] [Bug 1391451] md-cache: Invalidate cache entry in case of OPEN with O_TRUNC
bugzilla at redhat.com
- [Bugs] [Bug 1391851] removal of file from nfs mount crashes ganesha server
bugzilla at redhat.com
- [Bugs] [Bug 1392181] "gluster vol status all clients --xml" get malformed at times, causes gstatus to fail
bugzilla at redhat.com
- [Bugs] [Bug 1392289] gfapi clients crash while using async calls due to double fd_unref
bugzilla at redhat.com
- [Bugs] [Bug 1392715] Quota version not changing in the quota.conf after upgrading to 3.7.1 from 3.6.1
bugzilla at redhat.com
- [Bugs] [Bug 1392853] Hosted Engine VM paused post replace-brick operation
bugzilla at redhat.com
- [Bugs] [Bug 1392867] The FUSE client log is filling up with posix_acl_default and posix_acl_access messages
bugzilla at redhat.com
- [Bugs] [Bug 1393631] Better logging when reporting failures of the kind "<file-path> Failing MKNOD as quorum is not met"
bugzilla at redhat.com
- [Bugs] [Bug 1391448] md-cache: Invalidate cache entry in case of OPEN with O_TRUNC
bugzilla at redhat.com
- [Bugs] [Bug 1391450] md-cache: Invalidate cache entry in case of OPEN with O_TRUNC
bugzilla at redhat.com
- [Bugs] [Bug 1372171] Tracker bug for GlusterFS-v3.7.16
bugzilla at redhat.com
- [Bugs] [Bug 1392286] gfapi clients crash while using async calls due to double fd_unref
bugzilla at redhat.com
- [Bugs] [Bug 1392288] gfapi clients crash while using async calls due to double fd_unref
bugzilla at redhat.com
- [Bugs] [Bug 1392844] Hosted Engine VM paused post replace-brick operation
bugzilla at redhat.com
- [Bugs] [Bug 1392846] Hosted Engine VM paused post replace-brick operation
bugzilla at redhat.com
- [Bugs] [Bug 1393629] Better logging when reporting failures of the kind "<file-path> Failing MKNOD as quorum is not met"
bugzilla at redhat.com
- [Bugs] [Bug 1393630] Better logging when reporting failures of the kind "<file-path> Failing MKNOD as quorum is not met"
bugzilla at redhat.com
- [Bugs] [Bug 1392366] trashcan max file limit cannot go beyond 1GB
bugzilla at redhat.com
- [Bugs] [Bug 1394188] SMB[md-cache Private Build]: Error messages in brick logs related to upcall_cache_invalidate gf_uuid_is_null
bugzilla at redhat.com
- [Bugs] [Bug 1395245] glusterd crash
bugzilla at redhat.com
- [Bugs] [Bug 1396419] [md-cache]: All bricks crashed while performing symlink and rename from client at the same time
bugzilla at redhat.com
- [Bugs] [Bug 1397662] libgfapi core dumps
bugzilla at redhat.com
- [Bugs] [Bug 1389422] SMB[md-cache Private Build]: Error messages in brick logs related to upcall_cache_invalidate gf_uuid_is_null
bugzilla at redhat.com
- [Bugs] [Bug 1392363] trashcan max file limit cannot go beyond 1GB
bugzilla at redhat.com
- [Bugs] [Bug 1392364] trashcan max file limit cannot go beyond 1GB
bugzilla at redhat.com
- [Bugs] [Bug 1393282] (glusterfs-3.7.18) Tracker bug for GlusterFS-v3.7.18
bugzilla at redhat.com
- [Bugs] [Bug 1393282] (glusterfs-3.7.18) Tracker bug for GlusterFS-v3.7.18
bugzilla at redhat.com
- [Bugs] [Bug 1404118] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1404118] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1386766] trashcan max file limit cannot go beyond 1GB
bugzilla at redhat.com
- [Bugs] [Bug 1392363] trashcan max file limit cannot go beyond 1GB
bugzilla at redhat.com
- [Bugs] [Bug 1392364] trashcan max file limit cannot go beyond 1GB
bugzilla at redhat.com
- [Bugs] [Bug 1392366] trashcan max file limit cannot go beyond 1GB
bugzilla at redhat.com
- [Bugs] [Bug 1404133] [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
bugzilla at redhat.com
- [Bugs] [Bug 1393678] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1402215] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1403646] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1404168] New: Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404168] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1402710] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1404168] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404181] New: [Ganesha+SSL] : Ganesha crashes on all nodes on volume restarts
bugzilla at redhat.com
- [Bugs] [Bug 1404181] [Ganesha+SSL] : Ganesha crashes on all nodes on volume restarts
bugzilla at redhat.com
- [Bugs] [Bug 1399450] Backport few of the md-cache enhancements from master to 3.9
bugzilla at redhat.com
- [Bugs] [Bug 1399024] performance.read-ahead on results in processes on client stuck in IO wait
bugzilla at redhat.com
- [Bugs] [Bug 1399015] performance.read-ahead on results in processes on client stuck in IO wait
bugzilla at redhat.com
- [Bugs] [Bug 1396880] refresh-config fails and crashes ganesha when mdcache is enabled on the volume.
bugzilla at redhat.com
- [Bugs] [Bug 1404104] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1353561] Multiple bricks could crash after TCP port probing
bugzilla at redhat.com
- [Bugs] [Bug 1403646] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1402215] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1402710] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1390521] qemu gfapi in 3.8.5 is broken
bugzilla at redhat.com
- [Bugs] [Bug 1392347] RFE for glusterfind
bugzilla at redhat.com
- [Bugs] [Bug 1356453] DHT: slow readdirp performance
bugzilla at redhat.com
- [Bugs] [Bug 1403156] Memory leak on graph switch
bugzilla at redhat.com
- [Bugs] [Bug 1402254] compile warning unused variable
bugzilla at redhat.com
- [Bugs] [Bug 1402261] cli: compile warnings (unused var) if building without bd xlator
bugzilla at redhat.com
- [Bugs] [Bug 1402037] GlusterFS - Server halts update process ... AGAIN
bugzilla at redhat.com
- [Bugs] [Bug 1403612] With NFS root-squash the other x-bit has to be set to make dirs
bugzilla at redhat.com
- [Bugs] [Bug 1403125] Gluster Block Storage CLI Integration
bugzilla at redhat.com
- [Bugs] [Bug 1403599] Samba crashes with 3.9 and VFS module
bugzilla at redhat.com
- [Bugs] [Bug 1404129] Tracker bug for GlusterFS-v3.7.19
bugzilla at redhat.com
- [Bugs] [Bug 1402621] High load one node, gluster fuse clients hang, heal info does not complete
bugzilla at redhat.com
- [Bugs] [Bug 1402237] Bad spacing in error message in cli
bugzilla at redhat.com
- [Bugs] [Bug 1402661] Samba crash when mounting a distributed dispersed volume over CIFS
bugzilla at redhat.com
- [Bugs] [Bug 1402710] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1402297] Enabling Quota on volumes leads to inconsistent data
bugzilla at redhat.com
- [Bugs] [Bug 1402296] Enabling Quota on volumes leads to inconsistent data
bugzilla at redhat.com
- [Bugs] [Bug 1403984] Node node high CPU - healing entries increasing
bugzilla at redhat.com
- [Bugs] [Bug 1404128] Unable to create Volume : already part of a volume
bugzilla at redhat.com
- [Bugs] [Bug 1402406] Client stale file handle error in dht-linkfile.c under SPEC SFS 2014 VDA workload
bugzilla at redhat.com
- [Bugs] [Bug 1403889] DHT: If Re-balance fails while migrating the file, T bit is getting set on the file
bugzilla at redhat.com
- [Bugs] [Bug 1392347] RFE for glusterfind
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1393694] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1385474] [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation
bugzilla at redhat.com
- [Bugs] [Bug 1404168] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404168] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404168] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1402710] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1375849] [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1402216] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1375849] [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1402216] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1402216] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1402710] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1403984] Node node high CPU - healing entries increasing
bugzilla at redhat.com
- [Bugs] [Bug 1200268] Upcall: Support for lease_locks
bugzilla at redhat.com
- [Bugs] [Bug 1397911] GlusterFS 3.8.7 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1385794] io-throttling: Calculate moving averages and throttle offending hosts
bugzilla at redhat.com
- [Bugs] [Bug 1385794] io-throttling: Calculate moving averages and throttle offending hosts
bugzilla at redhat.com
- [Bugs] [Bug 1385794] io-throttling: Calculate moving averages and throttle offending hosts
bugzilla at redhat.com
- [Bugs] [Bug 1311460] unable to mount a glusterfs volume on clients
bugzilla at redhat.com
- [Bugs] [Bug 1397911] GlusterFS 3.8.7 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1311460] unable to mount a glusterfs volume on clients
bugzilla at redhat.com
- [Bugs] [Bug 1404410] New: [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1404410] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1404410] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1378547] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1404424] New: The data-self-heal option is not honored in AFR
bugzilla at redhat.com
- [Bugs] [Bug 1404424] The data-self-heal option is not honored in AFR
bugzilla at redhat.com
- [Bugs] [Bug 1404437] New: Allow to set dynamic library path from env variable
bugzilla at redhat.com
- [Bugs] [Bug 1404437] Allow to set dynamic library path from env variable
bugzilla at redhat.com
- [Bugs] [Bug 1404442] New: The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1404442] The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1404410] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1404442] The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1404442] The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1404442] The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1404442] The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1404442] The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1378547] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1401801] [RFE] Use Host UUID to find local nodes to spawn workers
bugzilla at redhat.com
- [Bugs] [Bug 1402216] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1402710] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1404572] New: ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1404572] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1389781] build: python on Debian-based dists use .../lib/python2.7/dist-packages instead of .../site-packages
bugzilla at redhat.com
- [Bugs] [Bug 1404573] New: tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1402297] Enabling Quota on volumes leads to inconsistent data
bugzilla at redhat.com
- [Bugs] [Bug 1403109] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1395652] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1397663] libgfapi core dumps
bugzilla at redhat.com
- [Bugs] [Bug 1398501] [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation
bugzilla at redhat.com
- [Bugs] [Bug 1399018] performance.read-ahead on results in processes on client stuck in IO wait
bugzilla at redhat.com
- [Bugs] [Bug 1399088] geo-replica slave node goes faulty for non-root user session due to fail to locate gluster binary
bugzilla at redhat.com
- [Bugs] [Bug 1399090] [geo-rep]: Worker crashes seen while renaming directories in loop
bugzilla at redhat.com
- [Bugs] [Bug 1399130] SEEK_HOLE/SEEK_DATA doesn't return the correct offset
bugzilla at redhat.com
- [Bugs] [Bug 1399130] SEEK_HOLE/SEEK_DATA doesn't return the correct offset
bugzilla at redhat.com
- [Bugs] [Bug 1399635] Refresh config fails while exporting subdirectories within a volume
bugzilla at redhat.com
- [Bugs] [Bug 1385474] [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation
bugzilla at redhat.com
- [Bugs] [Bug 1398500] [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation
bugzilla at redhat.com
- [Bugs] [Bug 1379228] smoke test fails with read/write failed (ENOTCONN)
bugzilla at redhat.com
- [Bugs] [Bug 1399015] performance.read-ahead on results in processes on client stuck in IO wait
bugzilla at redhat.com
- [Bugs] [Bug 1388150] geo-replica slave node goes faulty for non-root user session due to fail to locate gluster binary
bugzilla at redhat.com
- [Bugs] [Bug 1396332] Refresh config fails while exporting subdirectories within a volume
bugzilla at redhat.com
- [Bugs] [Bug 1400459] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
bugzilla at redhat.com
- [Bugs] [Bug 1400573] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1400573] Ganesha services are not stopped when pacemaker quorum is lost
bugzilla at redhat.com
- [Bugs] [Bug 1400802] glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1400927] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1402672] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1403192] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403646] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1400926] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403187] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1397911] GlusterFS 3.8.7 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1397911] GlusterFS 3.8.7 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1395221] Remove-brick: Remove-brick rebalance failed during continuous lookup+directory rename
bugzilla at redhat.com
- [Bugs] [Bug 1395221] Remove-brick: Remove-brick rebalance failed during continuous lookup+directory rename
bugzilla at redhat.com
- [Bugs] [Bug 1395221] Remove-brick: Remove-brick rebalance failed during continuous lookup+directory rename
bugzilla at redhat.com
- [Bugs] [Bug 1389781] build: python on Debian-based dists use .../lib/python2.7/dist-packages instead of .../site-packages
bugzilla at redhat.com
- [Bugs] [Bug 1404168] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404581] New: Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404168] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404583] New: Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404168] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404586] New: Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404581] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404581] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1396780] Make debugging EACCES errors easier to debug
bugzilla at redhat.com
- [Bugs] [Bug 1396780] Make debugging EACCES errors easier to debug
bugzilla at redhat.com
- [Bugs] [Bug 1404583] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404583] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404586] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404586] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1396780] Make debugging EACCES errors easier to debug
bugzilla at redhat.com
- [Bugs] [Bug 1404572] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1393694] The directories get renamed when data bricks are offline in 4*(2+1) volume
bugzilla at redhat.com
- [Bugs] [Bug 1401801] [RFE] Use Host UUID to find local nodes to spawn workers
bugzilla at redhat.com
- [Bugs] [Bug 1402216] [RFE] enable sharding and strict-o-direct with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1404128] Volume creation fails with error "host is not in 'Peer in Cluster' state"
bugzilla at redhat.com
- [Bugs] [Bug 1324531] RFE : Create trash directory only when it is enabled
bugzilla at redhat.com
- [Bugs] [Bug 1395221] Remove-brick: Remove-brick rebalance failed during continuous lookup+directory rename
bugzilla at redhat.com
- [Bugs] [Bug 1264849] RFE : Create trash directory only when it is enabled
bugzilla at redhat.com
- [Bugs] [Bug 1404118] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1404133] [GANESHA] Adding a node to cluster failed to allocate resource-agents to new node.
bugzilla at redhat.com
- [Bugs] [Bug 1404118] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1404118] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1404128] Volume creation fails with error "host is not in 'Peer in Cluster' state"
bugzilla at redhat.com
- [Bugs] [Bug 1393678] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1393678] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1404654] New: io-stats miss statistics when fh is not newly created
bugzilla at redhat.com
- [Bugs] [Bug 1404104] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1398331] With compound fops on, client process crashes when a replica is brought down while IO is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1404654] io-stats miss statistics when fh is not newly created
bugzilla at redhat.com
- [Bugs] [Bug 1402621] High load one node, gluster fuse clients hang, heal info does not complete
bugzilla at redhat.com
- [Bugs] [Bug 1404678] New: [geo-rep]: Config commands fail when the status is 'Created'
bugzilla at redhat.com
- [Bugs] [Bug 1404678] [geo-rep]: Config commands fail when the status is 'Created'
bugzilla at redhat.com
- [Bugs] [Bug 1404678] [geo-rep]: Config commands fail when the status is 'Created'
bugzilla at redhat.com
- [Bugs] [Bug 1404678] [geo-rep]: Config commands fail when the status is 'Created'
bugzilla at redhat.com
- [Bugs] [Bug 1404654] io-stats miss statistics when fh is not newly created
bugzilla at redhat.com
- [Bugs] [Bug 1404678] [geo-rep]: Config commands fail when the status is 'Created'
bugzilla at redhat.com
- [Bugs] [Bug 1404572] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1404572] ls and move hung on disperse volume
bugzilla at redhat.com
- [Bugs] [Bug 1404693] New: `rm` of file on mirrored glusterfs fs sometimes blocks indefinitely
bugzilla at redhat.com
- [Bugs] [Bug 1387241] Pass proper permission to acl_permit() in posix_acl_open()
bugzilla at redhat.com
- [Bugs] [Bug 1404766] New: The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1404766] The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1403108] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1403108] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1400803] glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1400803] glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1404766] The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1402261] cli: compile warnings (unused var) if building without bd xlator
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1402261] cli: compile warnings (unused var) if building without bd xlator
bugzilla at redhat.com
- [Bugs] [Bug 1378547] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1404410] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1404442] The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1404442] The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1401597] AFR fix locking bug in self-heal code path
bugzilla at redhat.com
- [Bugs] [Bug 1404905] New: DHT : file rename operation is successful but log has error 'key:trusted.glusterfs.dht.linkto error:File exists', 'setting xattrs on <old_filename> failed (File exists)'
bugzilla at redhat.com
- [Bugs] [Bug 1404905] DHT : file rename operation is successful but log has error 'key:trusted.glusterfs.dht.linkto error:File exists', 'setting xattrs on <old_filename> failed (File exists)'
bugzilla at redhat.com
- [Bugs] [Bug 1376694] Disable gluster-NFS post upgrade to >= RHGS 3.2 release
bugzilla at redhat.com
- [Bugs] [Bug 1376694] Disable gluster-NFS post upgrade to >= RHGS 3.2 release
bugzilla at redhat.com
- [Bugs] [Bug 1378547] Asynchronous Unsplit-brain still causes Input/Output Error on system calls
bugzilla at redhat.com
- [Bugs] [Bug 1403577] GlusterFS 3.8.8 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1399432] A hard link is lost during rebalance+lookup
bugzilla at redhat.com
- [Bugs] [Bug 1399432] A hard link is lost during rebalance+lookup
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1392347] RFE for glusterfind
bugzilla at redhat.com
- [Bugs] [Bug 1234054] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1377437] [RFE] Modularizing backend snapshot creation as separate plugin
bugzilla at redhat.com
- [Bugs] [Bug 1388461] [Eventing]: BRICK_DISCONNECTED events seen when a tier volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1388461] [Eventing]: BRICK_DISCONNECTED events seen when a tier volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1385474] [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation
bugzilla at redhat.com
- [Bugs] [Bug 1399031] build: add systemd dependency to glusterfs sub-package
bugzilla at redhat.com
- [Bugs] [Bug 1367665] rotated FUSE mount log is used to populate the information after log rotate.
bugzilla at redhat.com
- [Bugs] [Bug 1401597] AFR fix locking bug in self-heal code path
bugzilla at redhat.com
- [Bugs] [Bug 1404410] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1404410] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405002] New: [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405002] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1404410] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405002] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405004] New: [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405004] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405002] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405004] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1404410] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405002] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405004] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1404181] [Ganesha+SSL] : Ganesha crashes on all nodes on volume restarts
bugzilla at redhat.com
- [Bugs] [Bug 1404181] [Ganesha+SSL] : Ganesha crashes on all nodes on volume restarts
bugzilla at redhat.com
- [Bugs] [Bug 1404181] [Ganesha+SSL] : Ganesha crashes on all nodes on volume restarts
bugzilla at redhat.com
- [Bugs] [Bug 1393466] Request branch creation for FB commits and addition of merge rights for specific users
bugzilla at redhat.com
- [Bugs] [Bug 1404181] [Ganesha+SSL] : Ganesha crashes on all nodes on volume restarts
bugzilla at redhat.com
- [Bugs] [Bug 1361098] Feature: Entry self-heal performance enhancements using more granular changelogs
bugzilla at redhat.com
- [Bugs] [Bug 1404442] The root inode is not cached by md-cache
bugzilla at redhat.com
- [Bugs] [Bug 1234054] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1375849] [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1234054] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405002] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1234054] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405126] New: `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405126] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405126] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1234054] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405126] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405130] New: `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405130] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405130] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405130] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1405004] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405002] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405004] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405165] New: Allow user to disable mem-pool
bugzilla at redhat.com
- [Bugs] [Bug 1405165] Allow user to disable mem-pool
bugzilla at redhat.com
- [Bugs] [Bug 1405126] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405126] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405130] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1390521] qemu gfapi in 3.8.5 is broken
bugzilla at redhat.com
- [Bugs] [Bug 1390521] qemu gfapi in 3.8.5 is broken
bugzilla at redhat.com
- [Bugs] [Bug 1375849] [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1234054] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405126] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405130] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1234054] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405126] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405130] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1390521] qemu gfapi in 3.8.5 is broken
bugzilla at redhat.com
- [Bugs] [Bug 1405301] New: Fix the failure in tests/basic/gfapi/bug1291259.t
bugzilla at redhat.com
- [Bugs] [Bug 1405301] Fix the failure in tests/basic/gfapi/bug1291259.t
bugzilla at redhat.com
- [Bugs] [Bug 1405301] Fix the failure in tests/basic/gfapi/bug1291259.t
bugzilla at redhat.com
- [Bugs] [Bug 1405305] New: Fix the failure in tests/basic/gfapi/bug1291259.t
bugzilla at redhat.com
- [Bugs] [Bug 1405308] New: [compound fops] fuse mount crashed when VM installation is in progress & one of the brick killed
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1405305] Fix the failure in tests/basic/gfapi/bug1291259.t
bugzilla at redhat.com
- [Bugs] [Bug 1405308] [compound fops] fuse mount crashed when VM installation is in progress & one of the brick killed
bugzilla at redhat.com
- [Bugs] [Bug 1405308] [compound fops] fuse mount crashed when VM installation is in progress & one of the brick killed
bugzilla at redhat.com
- [Bugs] [Bug 1404573] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1404573] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1398859] tier: performance improvement with multi-threaded brick processing
bugzilla at redhat.com
- [Bugs] [Bug 1404573] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1398859] tier: performance improvement with multi-threaded brick processing
bugzilla at redhat.com
- [Bugs] [Bug 1404118] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1380257] [RFE] eventsapi/georep: Events are not available for Checkpoint and Status Change
bugzilla at redhat.com
- [Bugs] [Bug 1405147] glusterfs (posix-acl xlator layer) checks for "write permission" instead for "file owner" during open() when writing to a file
bugzilla at redhat.com
- [Bugs] [Bug 1405390] New: probable 'tar' failure after end of smoke test
bugzilla at redhat.com
- [Bugs] [Bug 1405147] glusterfs (posix-acl xlator layer) checks for "write permission" instead for "file owner" during open() when writing to a file
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1390521] qemu gfapi in 3.8.5 is broken
bugzilla at redhat.com
- [Bugs] [Bug 1376695] Provide a prompt when enabling gluster-NFS
bugzilla at redhat.com
- [Bugs] [Bug 1376694] Disable gluster-NFS post upgrade to >= RHGS 3.2 release
bugzilla at redhat.com
- [Bugs] [Bug 1376695] Provide a prompt when enabling gluster-NFS
bugzilla at redhat.com
- [Bugs] [Bug 1402406] Client stale file handle error in dht-linkfile.c under SPEC SFS 2014 VDA workload
bugzilla at redhat.com
- [Bugs] [Bug 1404573] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1404573] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1404573] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1405450] New: tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1404573] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1405451] New: tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1405450] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1405451] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1405450] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1405451] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1405450] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1405451] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1405478] New: Keepalive should be set for IPv6 & IPv4
bugzilla at redhat.com
- [Bugs] [Bug 1379673] Creation of file hangs while doing ls from another mount.
bugzilla at redhat.com
- [Bugs] [Bug 1405478] Keepalive should be set for IPv6 & IPv4
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1405554] New: Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405554] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405554] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405576] New: [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405576] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405576] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405577] New: [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405577] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405576] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405576] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405577] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405478] Keepalive should be set for IPv6 & IPv4
bugzilla at redhat.com
- [Bugs] [Bug 1405625] New: Add Halo geo-replication support (step 1).
bugzilla at redhat.com
- [Bugs] [Bug 1405625] Add Halo geo-replication support (step 1).
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1405628] New: Socket search code at startup is slow
bugzilla at redhat.com
- [Bugs] [Bug 1405628] Socket search code at startup is slow
bugzilla at redhat.com
- [Bugs] [Bug 1404118] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1404118] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1404437] Allow to set dynamic library path from env variable
bugzilla at redhat.com
- [Bugs] [Bug 1405625] Add Halo geo-replication support (step 1).
bugzilla at redhat.com
- [Bugs] [Bug 1405625] Add Halo geo-replication support (step 1).
bugzilla at redhat.com
- [Bugs] [Bug 1405775] New: GlusterFS process crashed after add-brick
bugzilla at redhat.com
- [Bugs] [Bug 1405577] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405308] [compound fops] fuse mount crashed when VM installation is in progress & one of the brick killed
bugzilla at redhat.com
- [Bugs] [Bug 1405308] [compound fops] fuse mount crashed when VM installation is in progress & one of the brick killed
bugzilla at redhat.com
- [Bugs] [Bug 1405576] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405577] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405775] GlusterFS process crashed after add-brick
bugzilla at redhat.com
- [Bugs] [Bug 1404905] DHT : file rename operation is successful but log has error 'key:trusted.glusterfs.dht.linkto error:File exists', 'setting xattrs on <old_filename> failed (File exists)'
bugzilla at redhat.com
- [Bugs] [Bug 1405885] New: Fix potential leaks in INODELK cbk in protocol/client
bugzilla at redhat.com
- [Bugs] [Bug 1405885] Fix potential leaks in INODELK cbk in protocol/client
bugzilla at redhat.com
- [Bugs] [Bug 1405885] Fix potential leaks in INODELK cbk in protocol/client
bugzilla at redhat.com
- [Bugs] [Bug 1405885] Fix potential leaks in INODELK cbk in protocol/client
bugzilla at redhat.com
- [Bugs] [Bug 1405886] New: Fix potential leaks in INODELK cbk in protocol/client
bugzilla at redhat.com
- [Bugs] [Bug 1405886] Fix potential leaks in INODELK cbk in protocol/client
bugzilla at redhat.com
- [Bugs] [Bug 1405886] Fix potential leaks in INODELK cbk in protocol/client
bugzilla at redhat.com
- [Bugs] [Bug 1405554] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405889] New: Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405554] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405889] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405890] New: Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405890] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405889] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405890] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405889] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405890] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1384459] Track the client that performed readdirp
bugzilla at redhat.com
- [Bugs] [Bug 1403144] [GANESHA] Unable to export the ganesha volume after doing volume start and stop
bugzilla at redhat.com
- [Bugs] [Bug 1405902] New: Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1405902] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1405775] GlusterFS process crashed after add-brick
bugzilla at redhat.com
- [Bugs] [Bug 1405902] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1405902] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1404118] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1405909] New: Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1405909] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1405909] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1405909] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1405902] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1405902] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1404101] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1404101] Snapshot: Snapshot create command fails when gluster-shared-storage volume is stopped
bugzilla at redhat.com
- [Bugs] [Bug 1389746] Refresh config fails while exporting subdirectories within a volume
bugzilla at redhat.com
- [Bugs] [Bug 1396332] Refresh config fails while exporting subdirectories within a volume
bugzilla at redhat.com
- [Bugs] [Bug 1399635] Refresh config fails while exporting subdirectories within a volume
bugzilla at redhat.com
- [Bugs] [Bug 1405918] New: Refresh config fails while exporting subdirectories within a volume
bugzilla at redhat.com
- [Bugs] [Bug 1405918] Refresh config fails while exporting subdirectories within a volume
bugzilla at redhat.com
- [Bugs] [Bug 1405918] Refresh config fails while exporting subdirectories within a volume
bugzilla at redhat.com
- [Bugs] [Bug 1405909] Snapshot: After snapshot restore failure, snapshot goes into inconsistent state
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1405775] GlusterFS process crashed after add-brick
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1405301] Fix the failure in tests/basic/gfapi/bug1291259.t
bugzilla at redhat.com
- [Bugs] [Bug 1405554] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405889] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405305] Fix the failure in tests/basic/gfapi/bug1291259.t
bugzilla at redhat.com
- [Bugs] [Bug 1399450] Backport few of the md-cache enhancements from master to 3.9
bugzilla at redhat.com
- [Bugs] [Bug 1397795] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1402366] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1405951] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1405951] New: NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1378300] Modifications to AFR Events
bugzilla at redhat.com
- [Bugs] [Bug 1405554] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405889] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1405301] Fix the failure in tests/basic/gfapi/bug1291259.t
bugzilla at redhat.com
- [Bugs] [Bug 1405301] Fix the failure in tests/basic/gfapi/bug1291259.t
bugzilla at redhat.com
- [Bugs] [Bug 1405951] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1405951] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1397795] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1402366] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1405951] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1405955] New: NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1405955] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1403144] [GANESHA] Unable to export the ganesha volume after doing volume start and stop
bugzilla at redhat.com
- [Bugs] [Bug 1387241] Pass proper permission to acl_permit() in posix_acl_open()
bugzilla at redhat.com
- [Bugs] [Bug 1405918] Refresh config fails while exporting subdirectories within a volume
bugzilla at redhat.com
- [Bugs] [Bug 1379673] Creation of file hangs while doing ls from another mount.
bugzilla at redhat.com
- [Bugs] [Bug 1379673] Creation of file hangs while doing ls from another mount.
bugzilla at redhat.com
- [Bugs] [Bug 1404581] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404583] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404581] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404583] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404586] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1404586] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1395212] move static analysis to cage and create voting tests in gerrit
bugzilla at redhat.com
- [Bugs] [Bug 1403644] Need a CentOS slave to debug a test failure seen only with my patch (http://review.gluster.org/16046)
bugzilla at redhat.com
- [Bugs] [Bug 1393466] Request branch creation for FB commits and addition of merge rights for specific users
bugzilla at redhat.com
- [Bugs] [Bug 1402661] Samba crash when mounting a distributed dispersed volume over CIFS
bugzilla at redhat.com
- [Bugs] [Bug 1406224] New: VM pauses due to storage I/O error, when one of the data brick is down with arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1406224] VM pauses due to storage I/O error, when one of the data brick is down with arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1406224] VM pauses due to storage I/O error, when one of the data brick is down with arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1393466] Request branch creation for FB commits and addition of merge rights for specific users
bugzilla at redhat.com
- [Bugs] [Bug 1406224] VM pauses due to storage I/O error, when one of the data brick is down with arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1406224] VM pauses due to storage I/O error, when one of the data brick is down with arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1406224] VM pauses due to storage I/O error, when one of the data brick is down with arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1402661] Samba crash when mounting a distributed dispersed volume over CIFS
bugzilla at redhat.com
- [Bugs] [Bug 1393466] Request branch creation for FB commits and addition of merge rights for specific users
bugzilla at redhat.com
- [Bugs] [Bug 1406249] New: [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file
bugzilla at redhat.com
- [Bugs] [Bug 1406249] [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file
bugzilla at redhat.com
- [Bugs] [Bug 1402661] Samba crash when mounting a distributed dispersed volume over CIFS
bugzilla at redhat.com
- [Bugs] [Bug 1406252] New: Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1362129] rename of a file can cause data loss in an arbiter volume configuration
bugzilla at redhat.com
- [Bugs] [Bug 1405902] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1406252] Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1406252] Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1405451] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1405450] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1405451] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1405450] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1390521] qemu gfapi in 3.8.5 is broken
bugzilla at redhat.com
- [Bugs] [Bug 1399914] [SAMBA-CIFS] : IO hungs in cifs mount while graph switch on & off
bugzilla at redhat.com
- [Bugs] [Bug 1399914] [SAMBA-CIFS] : IO hungs in cifs mount while graph switch on & off
bugzilla at redhat.com
- [Bugs] [Bug 1406252] Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1406252] Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1406252] Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1406252] Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1406308] New: Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1404128] Volume creation fails with error "host is not in 'Peer in Cluster' state"
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1406348] New: [Eventing]: POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op
bugzilla at redhat.com
- [Bugs] [Bug 1406348] [Eventing]: POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op
bugzilla at redhat.com
- [Bugs] [Bug 1395221] Remove-brick: Remove-brick rebalance failed during continuous lookup+directory rename
bugzilla at redhat.com
- [Bugs] [Bug 1406410] New: [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
bugzilla at redhat.com
- [Bugs] [Bug 1406410] [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
bugzilla at redhat.com
- [Bugs] [Bug 1406411] New: Add-brick command fails when one of the replica brick is down
bugzilla at redhat.com
- [Bugs] [Bug 1406410] [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
bugzilla at redhat.com
- [Bugs] [Bug 1406411] Add-brick command fails when one of the replica brick is down
bugzilla at redhat.com
- [Bugs] [Bug 1406411] Add-brick command fails when one of the replica brick is down
bugzilla at redhat.com
- [Bugs] [Bug 1362129] rename of a file can cause data loss in an arbiter volume configuration
bugzilla at redhat.com
- [Bugs] [Bug 1406411] Add-brick command fails when one of the replica brick is down
bugzilla at redhat.com
- [Bugs] [Bug 1405577] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405576] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1362129] rename of a file can cause data loss in an arbiter volume configuration
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405576] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405576] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405577] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405577] [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1388323] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1401534] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1406547] New: Poor write performance with sharding enabled
bugzilla at redhat.com
- [Bugs] [Bug 1406569] New: Element missing for arbiter bricks in XML volume status details output
bugzilla at redhat.com
- [Bugs] [Bug 1406601] New: VM for Dashboard - UNCC
bugzilla at redhat.com
- [Bugs] [Bug 1406601] VM for Dashboard - UNCC
bugzilla at redhat.com
- [Bugs] [Bug 1406601] VM for Dashboard - UNCC
bugzilla at redhat.com
- [Bugs] [Bug 1406569] Element missing for arbiter bricks in XML volume status details output
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1264849] RFE : Create trash directory only when its is enabled
bugzilla at redhat.com
- [Bugs] [Bug 1405775] GlusterFS process crashed after add-brick
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1405886] Fix potential leaks in INODELK cbk in protocol/client
bugzilla at redhat.com
- [Bugs] [Bug 1405885] Fix potential leaks in INODELK cbk in protocol/client
bugzilla at redhat.com
- [Bugs] [Bug 1405885] Fix potential leaks in INODELK cbk in protocol/client
bugzilla at redhat.com
- [Bugs] [Bug 1405886] Fix potential leaks in INODELK cbk in protocol/client
bugzilla at redhat.com
- [Bugs] [Bug 1406252] Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1406252] Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1406308] Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1406308] Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1313838] Tiering as separate process and in v status moving tier task to tier process
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1405902] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1402212] Fix compound fops memory leaks
bugzilla at redhat.com
- [Bugs] [Bug 1405902] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1395221] Remove-brick: Remove-brick rebalance failed during continuous lookup+directory rename
bugzilla at redhat.com
- [Bugs] [Bug 1406411] Add-brick command fails when one of the replica brick is down
bugzilla at redhat.com
- [Bugs] [Bug 1406411] Add-brick command fails when one of the replica brick is down
bugzilla at redhat.com
- [Bugs] [Bug 1406411] Add-brick command fails when one of the replica brick is down
bugzilla at redhat.com
- [Bugs] [Bug 1406411] Add-brick command fails when one of the replica brick is down
bugzilla at redhat.com
- [Bugs] [Bug 1405902] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1406739] New: Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1406739] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1405902] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1406739] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1406740] New: Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1406740] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1359599] BitRot : - bit-rot.signature and bit-rot.version xattr should not be set if bitrot is not enabled on volume
bugzilla at redhat.com
- [Bugs] [Bug 1406743] New: BitRot : - bit-rot.signature and bit-rot.version xattr should not be set if bitrot is not enabled on volume
bugzilla at redhat.com
- [Bugs] [Bug 1406743] BitRot : - bit-rot.signature and bit-rot.version xattr should not be set if bitrot is not enabled on volume
bugzilla at redhat.com
- [Bugs] [Bug 1406601] VM for Dashboard - UNCC
bugzilla at redhat.com
- [Bugs] [Bug 1406249] [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file
bugzilla at redhat.com
- [Bugs] [Bug 1406224] VM pauses due to storage I/O error, when one of the data brick is down with arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1402172] Peer unexpectedly disconnected
bugzilla at redhat.com
- [Bugs] [Bug 1406878] New: ec prove tests fail in FB build environment.
bugzilla at redhat.com
- [Bugs] [Bug 1406878] ec prove tests fail in FB build environment.
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in /var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405576] [GANESHA] failed to create directory of hostname of new node in /var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1385758] [RFE] Support multiple bricks in one process (multiplexing)
bugzilla at redhat.com
- [Bugs] [Bug 1406883] New: Prove tests which use 'build_tester' fail in FB environment.
bugzilla at redhat.com
- [Bugs] [Bug 1364422] [libgfchangelog]: If changelogs are not available for the requested time range, no distinguished error
bugzilla at redhat.com
- [Bugs] [Bug 1406895] New: RPC defaults to IPv4, need v6 default build time option.
bugzilla at redhat.com
- [Bugs] [Bug 1406895] RPC defaults to IPv4, need v6 default build time option.
bugzilla at redhat.com
- [Bugs] [Bug 1406898] New: Need build time option to default to IPv6
bugzilla at redhat.com
- [Bugs] [Bug 1406898] Need build time option to default to IPv6
bugzilla at redhat.com
- [Bugs] [Bug 1406601] VM for Dashboard - UNCC
bugzilla at redhat.com
- [Bugs] [Bug 1406914] New: RFE for setting global volume options
bugzilla at redhat.com
- [Bugs] [Bug 1406916] New: RFE for setting multiple gluster options
bugzilla at redhat.com
- [Bugs] [Bug 1406898] Need build time option to default to IPv6
bugzilla at redhat.com
- [Bugs] [Bug 1406547] Poor write performance with sharding enabled
bugzilla at redhat.com
- [Bugs] [Bug 1394883] Failed to enable nfs-ganesha after disabling nfs-ganesha cluster
bugzilla at redhat.com
- [Bugs] [Bug 1394226] "nfs-grace-monitor" timed out messages observed
bugzilla at redhat.com
- [Bugs] [Bug 1394187] SMB[md-cache Private Build]: Error messages in brick logs related to upcall_cache_invalidate gf_uuid_is_null
bugzilla at redhat.com
- [Bugs] [Bug 1396418] [md-cache]: All bricks crashed while performing symlink and rename from client at the same time
bugzilla at redhat.com
- [Bugs] [Bug 1392364] trashcan max file limit cannot go beyond 1GB
bugzilla at redhat.com
- [Bugs] [Bug 1369766] glusterd: add brick command should re-use the port for listening which is freed by remove-brick.
bugzilla at redhat.com
- [Bugs] [Bug 1394108] Continuous errors getting in the mount log when the volume mount server glusterd is down.
bugzilla at redhat.com
- [Bugs] [Bug 1395627] Labelled geo-rep checkpoints hide geo-replication status
bugzilla at redhat.com
- [Bugs] [Bug 1387976] Continuous warning messages getting when one of the cluster node is down on SSL setup.
bugzilla at redhat.com
- [Bugs] [Bug 1384412] GlusterFS 3.8.6 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1398501] [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation
bugzilla at redhat.com
- [Bugs] [Bug 1397663] libgfapi core dumps
bugzilla at redhat.com
- [Bugs] [Bug 1399018] performance.read-ahead on results in processes on client stuck in IO wait
bugzilla at redhat.com
- [Bugs] [Bug 1398501] [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation
bugzilla at redhat.com
- [Bugs] [Bug 1399088] geo-replica slave node goes faulty for non-root user session due to fail to locate gluster binary
bugzilla at redhat.com
- [Bugs] [Bug 1395652] ganesha-ha.conf --status should validate if the VIPs are assigned to right nodes
bugzilla at redhat.com
- [Bugs] [Bug 1399090] [geo-rep]: Worker crashes seen while renaming directories in loop
bugzilla at redhat.com
- [Bugs] [Bug 1400927] Memory leak when self healing daemon queue is full
bugzilla at redhat.com
- [Bugs] [Bug 1400802] glusterfs_ctx_defaults_init is re-initializing ctx->locks
bugzilla at redhat.com
- [Bugs] [Bug 1399635] Refresh config fails while exporting subdirectories within a volume
bugzilla at redhat.com
- [Bugs] [Bug 1402672] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1400459] [USS, SSL] .snaps directory is not reachable when I/O encryption (SSL) is enabled
bugzilla at redhat.com
- [Bugs] [Bug 1403192] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403646] self-heal not happening, as self-heal info lists the same pending shards to be healed
bugzilla at redhat.com
- [Bugs] [Bug 1397911] GlusterFS 3.8.7 tracker
bugzilla at redhat.com
- [Bugs] [Bug 1389781] build: python on Debian-based dists use .../lib/python2.7/dist-packages instead of .../site-packages
bugzilla at redhat.com
- [Bugs] [Bug 1403109] Crash of glusterd when using long username with geo-replication
bugzilla at redhat.com
- [Bugs] [Bug 1375849] [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
bugzilla at redhat.com
- [Bugs] [Bug 1405004] [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
bugzilla at redhat.com
- [Bugs] [Bug 1405130] `gluster volume heal <vol-name> split-brain' does not heal if data/metadata/entry self-heal options are turned off
bugzilla at redhat.com
- [Bugs] [Bug 1405577] [GANESHA] failed to create directory of hostname of new node in /var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1404583] Upcall: Possible use after free when log level set to TRACE
bugzilla at redhat.com
- [Bugs] [Bug 1405450] tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
bugzilla at redhat.com
- [Bugs] [Bug 1401534] fuse mount point not accessible
bugzilla at redhat.com
- [Bugs] [Bug 1405886] Fix potential leaks in INODELK cbk in protocol/client
bugzilla at redhat.com
- [Bugs] [Bug 1406947] New: FB developers need push rights on release-3.8-fb branch
bugzilla at redhat.com
- [Bugs] [Bug 1406224] VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
bugzilla at redhat.com
- [Bugs] [Bug 1398331] With compound fops on, client process crashes when a replica is brought down while IO is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1398331] With compound fops on, client process crashes when a replica is brought down while IO is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1406916] RFE for setting multiple gluster options
bugzilla at redhat.com
- [Bugs] [Bug 1406916] RFE for setting multiple gluster options
bugzilla at redhat.com
- [Bugs] [Bug 1404181] [Ganesha+SSL] : Ganesha crashes on all nodes on volume restarts
bugzilla at redhat.com
- [Bugs] [Bug 1406740] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1264849] RFE : Create trash directory only when it is enabled
bugzilla at redhat.com
- [Bugs] [Bug 1405890] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1399989] [Disperse] healing should not start if only data bricks are UP
bugzilla at redhat.com
- [Bugs] [Bug 1405890] Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
bugzilla at redhat.com
- [Bugs] [Bug 1400833] possible memory leak on client when writing to a file while another client issues a truncate
bugzilla at redhat.com
- [Bugs] [Bug 1407014] New: [Eventing]: POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op
bugzilla at redhat.com
- [Bugs] [Bug 1407014] [Eventing]: POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op
bugzilla at redhat.com
- [Bugs] [Bug 1402672] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1407018] New: Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1407018] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1406224] VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
bugzilla at redhat.com
- [Bugs] [Bug 1406224] VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
bugzilla at redhat.com
- [Bugs] [Bug 1408101] New: Fix potential socket_poller thread deadlock and resource leak
bugzilla at redhat.com
- [Bugs] [Bug 1408101] Fix potential socket_poller thread deadlock and resource leak
bugzilla at redhat.com
- [Bugs] [Bug 1408104] New: Fix potential socket_poller thread deadlock and resource leak
bugzilla at redhat.com
- [Bugs] [Bug 1405775] GlusterFS process crashed after add-brick
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1406410] [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
bugzilla at redhat.com
- [Bugs] [Bug 1406249] [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file
bugzilla at redhat.com
- [Bugs] [Bug 1406410] [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
bugzilla at redhat.com
- [Bugs] [Bug 1408110] New: [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
bugzilla at redhat.com
- [Bugs] [Bug 1406249] [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file
bugzilla at redhat.com
- [Bugs] [Bug 1408111] New: [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file
bugzilla at redhat.com
- [Bugs] [Bug 1408110] [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
bugzilla at redhat.com
- [Bugs] [Bug 1408111] [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file
bugzilla at redhat.com
- [Bugs] [Bug 1408110] [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
bugzilla at redhat.com
- [Bugs] [Bug 1408110] [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1408111] [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file
bugzilla at redhat.com
- [Bugs] [Bug 1406410] [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
bugzilla at redhat.com
- [Bugs] [Bug 1406410] [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
bugzilla at redhat.com
- [Bugs] [Bug 1408115] New: Remove-brick rebalance failed while rm -rf is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1408115] Remove-brick rebalance failed while rm -rf is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1408115] Remove-brick rebalance failed while rm -rf is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1407018] Getting the warning message while erasing the gluster "glusterfs-server" package.
bugzilla at redhat.com
- [Bugs] [Bug 1408131] New: Remove tests/distaf
bugzilla at redhat.com
- [Bugs] [Bug 1408131] Remove tests/distaf
bugzilla at redhat.com
- [Bugs] [Bug 1408115] Remove-brick rebalance failed while rm -rf is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1408138] New: Remove-brick rebalance failed while rm -rf is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1406224] VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
bugzilla at redhat.com
- [Bugs] [Bug 1406224] VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
bugzilla at redhat.com
- [Bugs] [Bug 1406224] VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
bugzilla at redhat.com
- [Bugs] [Bug 1408171] New: VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
bugzilla at redhat.com
- [Bugs] [Bug 1408171] VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
bugzilla at redhat.com
- [Bugs] [Bug 1408171] VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
bugzilla at redhat.com
- [Bugs] [Bug 1408171] VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1406601] VM for Dashboard - UNCC
bugzilla at redhat.com
- [Bugs] [Bug 1406308] Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1408217] New: OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1361513] EC: Set/unset dirty flag for all the update operations
bugzilla at redhat.com
- [Bugs] [Bug 1408217] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1408217] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1408220] New: OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1408217] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1408220] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1408221] New: OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1360978] [RFE]Reducing number of network round trips
bugzilla at redhat.com
- [Bugs] [Bug 1408220] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1408221] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in /var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405576] [GANESHA] failed to create directory of hostname of new node in /var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405577] [GANESHA] failed to create directory of hostname of new node in /var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1408115] Remove-brick rebalance failed while rm -rf is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1400613] [GANESHA] failed to create directory of hostname of new node in /var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405576] [GANESHA] failed to create directory of hostname of new node in /var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1405577] [GANESHA] failed to create directory of hostname of new node in /var/lib/nfs/ganesha/ in already existing cluster nodes
bugzilla at redhat.com
- [Bugs] [Bug 1406601] VM for Dashboard - UNCC
bugzilla at redhat.com
- [Bugs] [Bug 1408110] [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1408111] [GANESHA] Deleting a node from ganesha cluster deletes the volume entry from /etc/ganesha/ganesha.conf file
bugzilla at redhat.com
- [Bugs] [Bug 1406916] RFE for setting multiple gluster options
bugzilla at redhat.com
- [Bugs] [Bug 1408359] New: `quota list` command displays 'N/A' when root of the volume is empty
bugzilla at redhat.com
- [Bugs] [Bug 1408359] `quota list` command displays 'N/A' when root of the volume is empty
bugzilla at redhat.com
- [Bugs] [Bug 1408359] `quota list` command displays 'N/A' when root of the volume is empty
bugzilla at redhat.com
- [Bugs] [Bug 1408362] New: Need a VM for serving nightly builds
bugzilla at redhat.com
- [Bugs] [Bug 1408363] New: Need a VM for signing packages
bugzilla at redhat.com
- [Bugs] [Bug 1408364] New: Let's have failurestat.gluster.org pointing to the same IP as fstat.gluster.org
bugzilla at redhat.com
- [Bugs] [Bug 1406739] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1406740] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1406739] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1408395] New: [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408395] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408395] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408395] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408115] Remove-brick rebalance failed while rm -rf is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1408138] Remove-brick rebalance failed while rm -rf is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1408414] New: Remove-brick rebalance failed while rm -rf is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1408414] Remove-brick rebalance failed while rm -rf is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1408414] Remove-brick rebalance failed while rm -rf is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1408431] New: GlusterD is not starting after multiple restarts on one node and parallel volume set options from other node.
bugzilla at redhat.com
- [Bugs] [Bug 1408431] GlusterD is not starting after multiple restarts on one node and parallel volume set options from other node.
bugzilla at redhat.com
- [Bugs] [Bug 1408431] GlusterD is not starting after multiple restarts on one node and parallel volume set options from other node.
bugzilla at redhat.com
- [Bugs] [Bug 1378842] [RFE] 'gluster volume get' should implement the way to retrieve volume options using the volume name 'all'
bugzilla at redhat.com
- [Bugs] [Bug 1405951] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1405955] NFS-Ganesha: Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
bugzilla at redhat.com
- [Bugs] [Bug 1378842] [RFE] 'gluster volume get' should implement the way to retrieve volume options using the volume name 'all'
bugzilla at redhat.com
- [Bugs] [Bug 1365822] [RFE] cli command to get max supported cluster.op-version
bugzilla at redhat.com
- [Bugs] [Bug 1406947] FB developers need push rights on release-3.8-fb branch
bugzilla at redhat.com
- [Bugs] [Bug 1404105] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1406411] Add-brick command fails when one of the replica brick is down
bugzilla at redhat.com
- [Bugs] [Bug 1313838] Tiering as separate process and in v status moving tier task to tier process
bugzilla at redhat.com
- [Bugs] [Bug 1404105] Incorrect incrementation of volinfo refcnt during volume start
bugzilla at redhat.com
- [Bugs] [Bug 1408660] New: Setup a CentOS 7 VM to test split-brain-favorite-child-policy.t failures
bugzilla at redhat.com
- [Bugs] [Bug 1406411] Add-brick command fails when one of the replica brick is down
bugzilla at redhat.com
- [Bugs] [Bug 1408680] New: [FEAT] FDL: Decouple metadata and data parts of fdl
bugzilla at redhat.com
- [Bugs] [Bug 1158654] [FEAT] Journal Based Replication (JBR - formerly NSR)
bugzilla at redhat.com
- [Bugs] [Bug 1408680] [FEAT] FDL: Decouple metadata and data parts of fdl
bugzilla at redhat.com
- [Bugs] [Bug 1408660] Setup a CentOS 7 VM to test split-brain-favorite-child-policy.t failures
bugzilla at redhat.com
- [Bugs] [Bug 1408660] Setup a CentOS 7 VM to test split-brain-favorite-child-policy.t failures
bugzilla at redhat.com
- [Bugs] [Bug 1404678] [geo-rep]: Config commands fail when the status is 'Created'
bugzilla at redhat.com
- [Bugs] [Bug 1404678] [geo-rep]: Config commands fail when the status is 'Created'
bugzilla at redhat.com
- [Bugs] [Bug 1406916] RFE for setting multiple gluster options
bugzilla at redhat.com
- [Bugs] [Bug 1408712] New: with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408712] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408171] VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
bugzilla at redhat.com
- [Bugs] [Bug 1408171] VM pauses due to storage I/O error, when one of the data brick is down with arbiter/replica volume
bugzilla at redhat.com
- [Bugs] [Bug 1408712] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408712] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1379655] Recording (ffmpeg) processes on FUSE get hung
bugzilla at redhat.com
- [Bugs] [Bug 1355846] Data corruption when disabling sharding
bugzilla at redhat.com
- [Bugs] [Bug 1403984] Node node high CPU - healing entries increasing
bugzilla at redhat.com
- [Bugs] [Bug 1403984] Node node high CPU - healing entries increasing
bugzilla at redhat.com
- [Bugs] [Bug 1379655] Recording (ffmpeg) processes on FUSE get hung
bugzilla at redhat.com
- [Bugs] [Bug 1408755] New: Remove tests/basic/rpm.t
bugzilla at redhat.com
- [Bugs] [Bug 1408712] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408757] New: Fix failure of split-brain-favorite-child-policy.t in CentOS7
bugzilla at redhat.com
- [Bugs] [Bug 1408757] Fix failure of split-brain-favorite-child-policy.t in CentOS7
bugzilla at redhat.com
- [Bugs] [Bug 1408757] Fix failure of split-brain-favorite-child-policy.t in CentOS7
bugzilla at redhat.com
- [Bugs] [Bug 1408757] Fix failure of split-brain-favorite-child-policy.t in CentOS7
bugzilla at redhat.com
- [Bugs] [Bug 1408660] Setup a CentOS 7 VM to test split-brain-favorite-child-policy.t failures
bugzilla at redhat.com
- [Bugs] [Bug 1408758] New: tests/bugs/glusterd/bug-913555.t fails spuriously
bugzilla at redhat.com
- [Bugs] [Bug 1408758] tests/bugs/glusterd/bug-913555.t fails spuriously
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1408755] Remove tests/basic/rpm.t
bugzilla at redhat.com
- [Bugs] [Bug 1408755] Remove tests/basic/rpm.t
bugzilla at redhat.com
- [Bugs] [Bug 1408395] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408712] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408395] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408757] Fix failure of split-brain-favorite-child-policy.t in CentOS7
bugzilla at redhat.com
- [Bugs] [Bug 1408758] tests/bugs/glusterd/bug-913555.t fails spuriously
bugzilla at redhat.com
- [Bugs] [Bug 1408395] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408770] New: [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408770] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408770] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408770] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408395] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408770] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408772] New: [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408772] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408772] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408772] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408773] New: Remove afr related volume option from jbr volfile
bugzilla at redhat.com
- [Bugs] [Bug 1408773] Remove afr related volume option from jbr volfile
bugzilla at redhat.com
- [Bugs] [Bug 1408773] Remove afr related volume option from jbr volfile
bugzilla at redhat.com
- [Bugs] [Bug 1158654] [FEAT] Journal Based Replication (JBR - formerly NSR)
bugzilla at redhat.com
- [Bugs] [Bug 1408773] Remove afr related volume option from jbr volfile
bugzilla at redhat.com
- [Bugs] [Bug 1408776] New: Jbr failed to replicate after converting a replica volume to jbr volume
bugzilla at redhat.com
- [Bugs] [Bug 1408776] Jbr failed to replicate after converting a replica volume to jbr volume
bugzilla at redhat.com
- [Bugs] [Bug 1408781] New: Create a new netbsd6 node
bugzilla at redhat.com
- [Bugs] [Bug 1408784] New: Failed to build on MacOSX
bugzilla at redhat.com
- [Bugs] [Bug 1408712] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408785] New: with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408785] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408785] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408712] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408785] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408786] New: with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408786] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408786] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408786] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408785] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408781] Create a new netbsd6 node
bugzilla at redhat.com
- [Bugs] [Bug 1378842] [RFE] 'gluster volume get' should implement the way to retrieve volume options using the volume name 'all'
bugzilla at redhat.com
- [Bugs] [Bug 1406308] Free xdr-allocated compound request and response arrays
bugzilla at redhat.com
- [Bugs] [Bug 1406740] Fix spurious failure in tests/bugs/replicate/bug-1402730.t
bugzilla at redhat.com
- [Bugs] [Bug 1312771] Expose o-direct option in posix xlator via 'volume set' command
bugzilla at redhat.com
- [Bugs] [Bug 1370410] [granular entry sh] - Provide a CLI to enable/disable the feature that checks that there are no heals pending before allowing the operation
bugzilla at redhat.com
- [Bugs] [Bug 1408431] GlusterD is not starting after multiple restarts on one node and parallel volume set options from other node.
bugzilla at redhat.com
- [Bugs] [Bug 1408431] GlusterD is not starting after multiple restarts on one node and parallel volume set options from other node.
bugzilla at redhat.com
- [Bugs] [Bug 1408431] GlusterD is not starting after multiple restarts on one node and parallel volume set options from other node.
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1406914] RFE for setting global volume options
bugzilla at redhat.com
- [Bugs] [Bug 1406914] RFE for setting global volume options
bugzilla at redhat.com
- [Bugs] [Bug 1408712] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408712] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1406914] RFE for setting global volume options
bugzilla at redhat.com
- [Bugs] [Bug 1408809] New: [Perf] : significant Performance regression seen with disperse volume when compared with 3.1.3
bugzilla at redhat.com
- [Bugs] [Bug 1408809] [Perf] : significant Performance regression seen with disperse volume when compared with 3.1.3
bugzilla at redhat.com
- [Bugs] [Bug 1408809] [Perf] : significant Performance regression seen with disperse volume when compared with 3.1.3
bugzilla at redhat.com
- [Bugs] [Bug 1408809] [Perf] : significant Performance regression seen with disperse volume when compared with 3.1.3
bugzilla at redhat.com
- [Bugs] [Bug 1408809] [Perf] : significant Performance regression seen with disperse volume when compared with 3.1.3
bugzilla at redhat.com
- [Bugs] [Bug 1408757] Fix failure of split-brain-favorite-child-policy.t in CentOS7
bugzilla at redhat.com
- [Bugs] [Bug 1408395] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408770] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408772] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408820] New: [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408820] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408820] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408776] Jbr failed to replicate after converting a replica volume to jbr volume
bugzilla at redhat.com
- [Bugs] [Bug 1158654] [FEAT] Journal Based Replication (JBR - formerly NSR)
bugzilla at redhat.com
- [Bugs] [Bug 1408776] Jbr failed to replicate after converting a replica volume to jbr volume
bugzilla at redhat.com
- [Bugs] [Bug 1408773] Remove afr related volume option from jbr volfile
bugzilla at redhat.com
- [Bugs] [Bug 1158654] [FEAT] Journal Based Replication (JBR - formerly NSR)
bugzilla at redhat.com
- [Bugs] [Bug 1408776] Jbr failed to replicate after converting a replica volume to jbr volume
bugzilla at redhat.com
- [Bugs] [Bug 1158654] [FEAT] Journal Based Replication (JBR - formerly NSR)
bugzilla at redhat.com
- [Bugs] [Bug 1408776] Jbr failed to replicate after converting a replica volume to jbr volume
bugzilla at redhat.com
- [Bugs] [Bug 1408755] Remove tests/basic/rpm.t
bugzilla at redhat.com
- [Bugs] [Bug 1408131] Remove tests/distaf
bugzilla at redhat.com
- [Bugs] [Bug 1356960] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1408431] GlusterD is not starting after multiple restarts on one node and parallel volume set options from other node.
bugzilla at redhat.com
- [Bugs] [Bug 1408414] Remove-brick rebalance failed while rm -rf is in progress
bugzilla at redhat.com
- [Bugs] [Bug 1408928] New: [RFE] Location of SSL/TLS certs & key files should be configurable
bugzilla at redhat.com
- [Bugs] [Bug 1408928] [RFE] Location of SSL/TLS certs & key files should be configurable
bugzilla at redhat.com
- [Bugs] [Bug 1408770] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408785] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408772] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408770] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408772] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408820] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408820] [Arbiter] After Killing a brick writes drastically slow down
bugzilla at redhat.com
- [Bugs] [Bug 1408776] Jbr failed to replicate after converting a replica volume to jbr volume
bugzilla at redhat.com
- [Bugs] [Bug 1408785] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1302944] RFE: Need a command to check op-version compatibility of clients
bugzilla at redhat.com
- [Bugs] [Bug 1335029] set errno in case of inode_link failures
bugzilla at redhat.com
- [Bugs] [Bug 1313838] Tiering as separate process and in v status moving tier task to tier process
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1313838] Tiering as separate process and in v status moving tier task to tier process
bugzilla at redhat.com
- [Bugs] [Bug 1396004] RFE: An administrator friendly way to determine rebalance completion time
bugzilla at redhat.com
- [Bugs] [Bug 1302944] RFE: Need a command to check op-version compatibility of clients
bugzilla at redhat.com
- [Bugs] [Bug 1409078] New: RFE: Need a command to check op-version compatibility of clients
bugzilla at redhat.com
- [Bugs] [Bug 1409078] RFE: Need a command to check op-version compatibility of clients
bugzilla at redhat.com
- [Bugs] [Bug 1409078] RFE: Need a command to check op-version compatibility of clients
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1403120] Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
bugzilla at redhat.com
- [Bugs] [Bug 1408786] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1408757] Fix failure of split-brain-favorite-child-policy.t in CentOS7
bugzilla at redhat.com
- [Bugs] [Bug 1409186] New: Dict_t leak in dht_migration_complete_check_task and dht_rebalance_inprogress_task
bugzilla at redhat.com
- [Bugs] [Bug 1409186] Dict_t leak in dht_migration_complete_check_task and dht_rebalance_inprogress_task
bugzilla at redhat.com
- [Bugs] [Bug 1409186] Dict_t leak in dht_migration_complete_check_task and dht_rebalance_inprogress_task
bugzilla at redhat.com
- [Bugs] [Bug 1409186] Dict_t leak in dht_migration_complete_check_task and dht_rebalance_inprogress_task
bugzilla at redhat.com
- [Bugs] [Bug 1402728] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1399468] Wrong value in Last Synced column during Hybrid Crawl
bugzilla at redhat.com
- [Bugs] [Bug 1402728] Worker restarts on log-rsync-performance config update
bugzilla at redhat.com
- [Bugs] [Bug 1399468] Wrong value in Last Synced column during Hybrid Crawl
bugzilla at redhat.com
- [Bugs] [Bug 1409189] New: Failed to set TCP_USER_TIMEOUT msgs seen in logs
bugzilla at redhat.com
- [Bugs] [Bug 1409191] New: [Perf] : Sequential and Random Writes are off target by 12% and 22% respectively on EC backed volumes over FUSE
bugzilla at redhat.com
- [Bugs] [Bug 1406878] ec prove tests fail in FB build environment.
bugzilla at redhat.com
- [Bugs] [Bug 1408786] with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
bugzilla at redhat.com
- [Bugs] [Bug 1401023] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1401032] OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
bugzilla at redhat.com
- [Bugs] [Bug 1409202] New: Warning messages throwing when EC volume offline brick comes up are difficult to understand for end user.
bugzilla at redhat.com
- [Bugs] [Bug 1409206] New: Extra lookup/fstats are sent over the network when a brick is down.
bugzilla at redhat.com
- [Bugs] [Bug 1409206] Extra lookup/fstats are sent over the network when a brick is down.
bugzilla at redhat.com
- [Bugs] [Bug 1409206] Extra lookup/fstats are sent over the network when a brick is down.
bugzilla at redhat.com
- [Bugs] [Bug 1409191] [Perf] : Sequential and Random Writes are off target by 12% and 22% respectively on EC backed volumes over FUSE
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1393316] OOM Kill on client when heal is in progress on 1*(2+1) arbiter volume
bugzilla at redhat.com
- [Bugs] [Bug 1378842] [RFE] 'gluster volume get' should implement the way to retrieve volume options using the volume name 'all'
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
- [Bugs] [Bug 1401812] RFE: Make readdirp parallel in dht
bugzilla at redhat.com
Last message date: Sat Dec 31 11:44:28 UTC 2016
Archived on: Sat Dec 31 11:44:31 UTC 2016
This archive was generated by Pipermail 0.09 (Mailman edition).