[Gluster-users] glusterfs-3.7.4 released
Kaushal M
kshlmster at gmail.com
Fri Sep 4 09:10:41 UTC 2015
Hi all,
I'm pleased to announce the release of GlusterFS-3.7.4. This release
includes 92 changes after 3.7.3. The list of fixed bugs is included
below.
Tarball and RPMs can be downloaded from
http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.4/
Ubuntu debs are available from
https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7
Debian Unstable (sid) packages have been updated and should be
available from default repos.
NetBSD has updated ports at
ftp://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/filesystems/glusterfs/README.html
The Arch Linux package has also been updated and is available from
the community repository.
Upgrade notes from 3.7.2 and earlier
====================================
From release 3.7.3 onwards, GlusterFS uses insecure ports by default. This
causes problems when upgrading from release 3.7.2 or earlier to 3.7.3
or later. Performing the following steps before upgrading avoids these
problems; a combined shell sketch follows the list.
- Enable insecure ports for all volumes.
```
gluster volume set <VOLNAME> server.allow-insecure on
gluster volume set <VOLNAME> client.bind-insecure on
```
- Enable insecure ports for GlusterD. Add the following line to
`/etc/glusterfs/glusterd.vol`:
```
option rpc-auth-allow-insecure on
```
This needs to be done on all the members in the cluster.
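For convenience, the two steps can be combined into a small script. The
following is a minimal sketch, not an official procedure: it assumes GNU
sed, the stock `/etc/glusterfs/glusterd.vol` path, and a systemd-based
system where GlusterD runs as the `glusterd` service; adjust the restart
command for your init system.
```
#!/bin/sh
# Minimal sketch, assuming GNU sed, the stock glusterd.vol path, and a
# systemd-managed glusterd service. Step 1 only needs to run once on any
# node; steps 2-3 must be repeated on every member of the cluster.

# 1. Enable insecure ports on every existing volume.
for vol in $(gluster volume list); do
    gluster volume set "$vol" server.allow-insecure on
    gluster volume set "$vol" client.bind-insecure on
done

# 2. Allow insecure connections to GlusterD itself, if not already set
#    (indentation of the option line inside the volume block is cosmetic).
grep -q 'rpc-auth-allow-insecure on' /etc/glusterfs/glusterd.vol || \
    sed -i '/end-volume/i option rpc-auth-allow-insecure on' /etc/glusterfs/glusterd.vol

# 3. Restart GlusterD so the new option takes effect.
systemctl restart glusterd
```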
Fixed bugs
==========
- 1223945 Scripts/Binaries are not installed with +x bit
- 1228216 Disperse volume: gluster volume status doesn't show shd status
- 1228521 USS: Take ref on root inode
- 1231678 geo-rep: gverify.sh throws error if slave_host entry is not
added to known_hosts file
- 1235202 tiering: tier daemon not restarting during volume/glusterd restart
- 1235964 Disperse volume: FUSE I/O error after self healing the
failed disk files
- 1236050 Disperse volume: fuse mount hung after self healing
- 1238706 snapd/quota/nfs daemons run on the node, even after that
node was detached from the trusted storage pool
- 1240920 libgfapi: Segfault seen when glfs_*() methods are invoked
with invalid glfd
- 1242536 Data Tiering: Rename of file is not heating up the file
- 1243384 EC volume: Replace bricks is not healing version of root directory
- 1244721 glusterd: Porting left out log messages to new logging API
- 1244724 quota: allowed to set soft-limit percentage beyond 100%
- 1245922 [SNAPSHOT]: Correction required in output message after
initialising snap_scheduler
- 1245923 [Snapshot] Scheduler should check vol-name exists or not
before adding scheduled jobs
- 1247014 sharding - Fix unlink of sparse files
- 1247153 SSL improvements: ECDH, DH, CRL, and accessible options
- 1247551 forgotten inodes are not being signed
- 1247615 tests/bugs/replicate/bug-1238508-self-heal.t fails in 3.7 branch
- 1247833 sharding - OS installation on vm image hangs on a sharded volume
- 1247850 Glusterfsd crashes because of thread-unsafe code in gf_authenticate
- 1247882 [geo-rep]: killing brick from replica pair makes geo-rep
session faulty with Traceback "ChangelogException"
- 1247910 Gluster peer probe with negative num
- 1247917 ./tests/basic/volume-snapshot.t spurious fail causing glusterd crash.
- 1248325 quota: In enforcer, caching parents in ctx during build
ancestry is not working
- 1248337 Data Tiering: Change the error message when a detach-tier
status is issued on a non-tier volume
- 1248450 rpc: check for unprivileged port should start at 1024 and
not beyond 1024
- 1248962 quota/marker: errors in log file 'Failed to get metadata for'
- 1249461 'unable to get transaction op-info' error seen in glusterd
log while executing gluster volume status command
- 1249547 [geo-rep]: rename followed by deletes causes ESTALE
- 1249921 [upgrade] After upgrade from 3.5 to 3.6 or later versions,
bumping up the op-version failed
- 1249925 DHT-rebalance: Rebalance hangs on distribute volume when
glusterd is stopped on peer node
- 1249983 Rebalance is failing in test cluster framework.
- 1250601 nfs-ganesha: remove the entry of the deleted node
- 1250628 nfs-ganesha: ganesha-ha.sh --status is actually the same as "pcs status"
- 1250809 Enable multi-threaded epoll for glusterd process
- 1250810 Make ping-timeout option configurable at a volume-level
- 1250834 Sharding - Excessive logging of messages of the kind 'Failed
to get trusted.glusterfs.shard.file-size for
bf292f5b-6dd6-45a8-b03c-aaf5bb973c50'
- 1250864 ec returns EIO error in cases where a more specific error
could be returned
- 1251106 sharding - Renames on non-sharded files failing with ENOMEM
- 1251380 statfs giving incorrect values for AFR arbiter volumes
- 1252272 rdma : pending - porting log messages to a new framework
- 1252297 Quota: volume-reset shouldn't remove quota-deem-statfs,
unless explicitly specified, when quota is enabled.
- 1252348 using fop's dict for resolving causes problems
- 1252680 probing and detaching a peer generated a CRITICAL error -
"Could not find peer" in glusterd logs
- 1252727 tiering: Tier daemon stopped prior to graph switch.
- 1252873 gluster vol quota dist-vol list is not displaying quota information.
- 1252903 Fix invalid logic in tier.t
- 1252907 Unable to demote files in tiered volumes when cold tier is EC.
- 1253148 gf_store_save_value fails to check for errors, leading to
emptying files in /var/lib/glusterd/
- 1253151 Sharding - Individual shards' ownership differs from that of
the original file
- 1253160 while re-configuring the scrubber frequency, scheduling is
not happening based on current time
- 1253165 glusterd services are not handled properly when
reconfiguring services
- 1253212 snapd crashed due to stack overflow
- 1253260 posix_make_ancestryfromgfid doesn't set op_errno
- 1253542 rebalance stuck at 0 byte when auth.allow is set
- 1253607 gluster snapshot status --xml gives back unexpected non xml output
- 1254419 nfs-ganesha: new volume creation tries to bring up
glusterfs-nfs even when nfs-ganesha is already on
- 1254436 logging: Revert usage of global xlator for log buffer
- 1254437 tiering: rename fails with "Device or resource busy" error message
- 1254438 Tiering: segfault when trying to rename a file
- 1254439 Quota list is not working on tiered volume.
- 1254442 tiering/snapshot: Tier daemon failed to start during volume
start after restoring into a tiered volume from a non-tiered volume.
- 1254468 Data Tiering : Some tier xlator_fops translate to the default fops
- 1254494 nfs-ganesha: refresh-config stdout output does not make sense
- 1254503 fuse: check return value of setuid
- 1254607 rpc: Address issues with transport object reference and leak
- 1254865 non-default symver macros are incorrect
- 1255244 Quota: After a rename operation, the gluster v quota <volname>
list-objects command gives an incorrect no. of files in the output
- 1255311 Snapshot: When the soft limit is reached and auto-delete is
enabled, create snapshot doesn't log anything in the log files
- 1255351 fail the fops if inode context get fails
- 1255604 Not able to recover the corrupted file on Replica volume
- 1255605 Scrubber log should mark file corrupted message as Alert not
as information
- 1255636 Remove unwanted tests from volume-snapshot.t
- 1255644 quota : display the size equivalent to the soft limit
percentage in gluster v quota <volname> list* command
- 1255690 AFR: gluster v restart force or brick process restart
doesn't heal the files
- 1255698 Write performance from a Windows client on 3-way replicated
volume decreases substantially when one brick in the replica set is
brought down
- 1256265 Data Loss: Remove-brick commit passes when the remove-brick
process has not even started (due to killing glusterd)
- 1256283 [remove-brick]: Creation of file from NFS writes to the
decommissioned subvolume and subsequent lookup from fuse creates a
link
- 1256307 [Backup]: Glusterfind session entry persists even after
volume is deleted
- 1256485 [Snapshot]/[NFS-Ganesha] mount point hangs upon snapshot
create-activate and 'cd' into .snaps directory
- 1256605 `gluster volume heal <vol-name> split-brain' changes
required for entry-split-brain
- 1256616 libgfapi : adding follow flag to glfs_h_lookupat()
- 1256669 Though scrubber settings are changed on only one volume, the
log shows scrubber information for all volumes
- 1256702 remove-brick: avoid mknod op falling on decommissioned brick
even after fix-layout has happened on parent directory
- 1256909 Unable to examine file in metadata split-brain after setting
`replica.split-brain-choice' attribute to a particular replica
- 1257193 protocol server : Pending - porting log messages to a new framework
- 1257204 sharding - VM image size as seen from the mount keeps
growing beyond configured size on a sharded volume
- 1257441 marker: set loc.parent if NULL
- 1257881 Quota list on a volume hangs after glusterd restart on a node.
- 1258306 bug-1238706-daemons-stop-on-peer-cleanup.t fails occasionally
- 1258344 tests: rebasing bad tests from mainline branch to release-3.7 branch
- 1258353 Sharding - Unlink of VM images can sometimes fail with EINVAL