[Gluster-users] [Gluster-devel] glusterfs-3.7.6 released

Atin Mukherjee amukherj at redhat.com
Tue Nov 17 06:33:52 UTC 2015


There is one more known issue which we forgot to include in the release
notes:

* 1282322 - Volume start fails post add-brick on a volume which is not
started

- If you upgrade the cluster partially and perform an add-brick
operation on a volume which is stopped, the add-brick will succeed but
the subsequent volume start will fail. The workaround is to toggle a
volume tunable with the volume set command, after which the volume can
be started (see the sketch below).
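
For illustration only, a minimal sketch of that workaround with the
stock gluster CLI; the mail does not name the tunable to toggle, so
diagnostics.client-log-level below is just a harmless placeholder, and
<VOLNAME> stands for the affected volume:

    # Placeholder tunable: setting any innocuous option is presumably
    # enough to make glusterd regenerate the volfiles.
    gluster volume set <VOLNAME> diagnostics.client-log-level INFO
    # The volume should now start normally.
    gluster volume start <VOLNAME>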

Thanks,
Atin

On 11/16/2015 04:24 PM, Raghavendra Talur wrote:
> Hi all,
> 
> I'm pleased to announce the release of GlusterFS-3.7.6. 
> The list of fixed bugs is included below.
> 
> Tarball and RPMs can be downloaded from
> http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.6/
> 
> Ubuntu debs are available from
> https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7
> 
> Debian Unstable (sid) packages have been updated and should be
> available from default repos.
> 
> NetBSD has updated ports at
> ftp://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/filesystems/glusterfs/README.html
> 
> 
>       Bugs Fixed:
> 
>   * 1057295 <https://bugzilla.redhat.com/1057295>: glusterfs doesn't
>     include firewalld rules
>   * 1219399 <https://bugzilla.redhat.com/1219399>: NFS interoperability
>     problem: Gluster Striped-Replicated can't read on vmware esxi 5.x
>     NFS client
>   * 1221957 <https://bugzilla.redhat.com/1221957>: Fully support
>     data-tiering in 3.7.x, remove out of 'experimental' status
>   * 1258197 <https://bugzilla.redhat.com/1258197>: gNFSd: NFS mount
>     fails with "Remote I/O error"
>   * 1258242 <https://bugzilla.redhat.com/1258242>: Data Tiering:
>     detach-tier start force command not available on a tier volume
>     (unlike remove-brick, where force is possible)
>   * 1258833 <https://bugzilla.redhat.com/1258833>: Data Tiering:
>     Disallow attach tier on a volume where any rebalance process is in
>     progress, to avoid deadlock (like remove-brick commit pending, etc.)
>   * 1259167 <https://bugzilla.redhat.com/1259167>: GF_LOG_NONE logs always
>   * 1261146 <https://bugzilla.redhat.com/1261146>: Legacy files
>     pre-existing tier attach must be promoted
>   * 1261732 <https://bugzilla.redhat.com/1261732>: Disperse volume: df
>     -h on a nfs mount throws Invalid argument error
>   * 1261744 <https://bugzilla.redhat.com/1261744>: Tier/shd: Tracker bug
>     for tier and shd compatibility
>   * 1261758 <https://bugzilla.redhat.com/1261758>: Tiering/glusterd:
>     volume status failed after detach tier start
>   * 1262860 <https://bugzilla.redhat.com/1262860>: Data Tiering: Tiering
>     daemon is seeing each part of a file in a Disperse cold volume as a
>     different file
>   * 1265623 <https://bugzilla.redhat.com/1265623>: Data Tiering:
>     Promotions and demotions fail after quota hard limits are hit for
>     a tier volume
>   * 1266836 <https://bugzilla.redhat.com/1266836>: AFR: fuse, nfs mount
>     hangs when directories with same names are created and deleted
>     continuously
>   * 1266880 <https://bugzilla.redhat.com/1266880>: Tiering: unlink
>     failed with error "Invalid argument"
>   * 1267816 <https://bugzilla.redhat.com/1267816>: quota/marker: marker
>     code cleanup
>   * 1269035 <https://bugzilla.redhat.com/1269035>: Data Tiering: Throw a
>     warning when user issues a detach-tier commit command
>   * 1269125 <https://bugzilla.redhat.com/1269125>: Data Tiering:
>     Regression: automation blocker: vol status for tier volumes using
>     xml format is not working
>   * 1269344 <https://bugzilla.redhat.com/1269344>: tier/cli: number of
>     bricks remains the same in v info --xml
>   * 1269501 <https://bugzilla.redhat.com/1269501>: Self-heal daemon
>     crashes when bricks go down at the time of data heal
>   * 1269530 <https://bugzilla.redhat.com/1269530>: Core: Blocker:
>     Segmentation fault when using fallocate command on a gluster volume
>   * 1269730 <https://bugzilla.redhat.com/1269730>: Sharding - Send inode
>     forgets on /all/ shards if/when the protocol layer (FUSE/Gfapi) at
>     the top sends a forget on the actual file
>   * 1270123 <https://bugzilla.redhat.com/1270123>: Data Tiering:
>     Database locks observed on tiered volumes on continuous writes to
>     a file
>   * 1270527 <https://bugzilla.redhat.com/1270527>: add policy mechanism
>     for promotion and demotion
>   * 1270769 <https://bugzilla.redhat.com/1270769>: quota/marker: dir
>     count in inode quota is not atomic
>   * 1271204 <https://bugzilla.redhat.com/1271204>: Introduce priv dump
>     in shard xlator for better debugging
>   * 1271249 <https://bugzilla.redhat.com/1271249>: tiering: compiler
>     warning with gcc v5.1.1
>   * 1271490 <https://bugzilla.redhat.com/1271490>: rm -rf on
>     /run/gluster/vol// is not showing the quota output header for other
>     quota-limit-applied directories
>   * 1271540 <https://bugzilla.redhat.com/1271540>: RHEL7/systemd : can't
>     have server in debug mode anymore
>   * 1271627 <https://bugzilla.redhat.com/1271627>: Creating an already
>     deleted snapshot-clone deletes the corresponding snapshot.
>   * 1271967 <https://bugzilla.redhat.com/1271967>: ECVOL: glustershd log
>     grows quickly and fills up the root volume
>   * 1272036 <https://bugzilla.redhat.com/1272036>: Data Tiering: getting
>     "failed to fsync on germany-hot-dht (Structure needs cleaning)"
>     warning
>   * 1272331 <https://bugzilla.redhat.com/1272331>: Tier: Do not
>     promote/demote files on which POSIX locks are held
>   * 1272334 <https://bugzilla.redhat.com/1272334>: Data Tiering:
>     Promotions fail when bricks of the EC (disperse) cold layer are down
>   * 1272398 <https://bugzilla.redhat.com/1272398>: Data Tiering: Lots of
>     "Promotions/Demotions failed" error messages
>   * 1273246 <https://bugzilla.redhat.com/1273246>: Tier xattr name is
>     misleading (trusted.tier-gfid)
>   * 1273334 <https://bugzilla.redhat.com/1273334>: Fix in afr
>     transaction code
>   * 1274101 <https://bugzilla.redhat.com/1274101>: need a way to
>     pause/stop tiering to take snapshot
>   * 1274600 <https://bugzilla.redhat.com/1274600>: [sharding+geo-rep]:
>     On existing slave mount, reading files fails to show sharded file
>     content
>   * 1275157 <https://bugzilla.redhat.com/1275157>: Reduce 'CTR disabled'
>     brick log message from ERROR to INFO/DEBUG
>   * 1275483 <https://bugzilla.redhat.com/1275483>: Data Tiering: heat
>     counters not getting reset, and internal ops seem to be heating
>     the files
>   * 1275502 <https://bugzilla.redhat.com/1275502>: [Tier]: Typo in the
>     output while setting the wrong value of low/hi watermark
>   * 1275921 <https://bugzilla.redhat.com/1275921>: Disk usage
>     mismatching after self-heal
>   * 1276029 <https://bugzilla.redhat.com/1276029>: Upgrading a subset of
>     cluster to 3.7.5 leads to issues with glusterd commands
>   * 1276060 <https://bugzilla.redhat.com/1276060>: dist-geo-rep: geo-rep
>     status shows Active/Passive even when all the gsync processes in a
>     node are killed
>   * 1276208 <https://bugzilla.redhat.com/1276208>: [RFE] 'gluster volume
>     help' output could be sorted alphabetically
>   * 1276244 <https://bugzilla.redhat.com/1276244>: gluster-nfs : Server
>     crashed due to an invalid reference
>   * 1276550 <https://bugzilla.redhat.com/1276550>: FUSE clients in a
>     container environment hang and do not recover after losing
>     connections to all bricks
>   * 1277080 <https://bugzilla.redhat.com/1277080>: quota: set quota
>     version for files/directories
>   * 1277394 <https://bugzilla.redhat.com/1277394>: Wrong value of
>     snap-max-hard-limit observed in 'gluster volume info'.
>   * 1277587 <https://bugzilla.redhat.com/1277587>: Data Tiering: tiering
>     daemon crashes when trying to heat the file
>   * 1277590 <https://bugzilla.redhat.com/1277590>: Tier : Move common
>     functions into tier.rc
>   * 1277800 <https://bugzilla.redhat.com/1277800>: [New] - Message
>     displayed after attach tier is misleading
>   * 1277984 <https://bugzilla.redhat.com/1277984>: Upgrading to 3.7.5-5
>     has changed volume to distributed disperse
>   * 1278578 <https://bugzilla.redhat.com/1278578>: move mount-nfs-auth.t
>     to failed tests lists
>   * 1278603 <https://bugzilla.redhat.com/1278603>: fix lookup-unhashed
>     for tiered volumes.
>   * 1278640 <https://bugzilla.redhat.com/1278640>: [New] - Files in a
>     tiered volume get promoted when bitd signs them
>   * 1278744 <https://bugzilla.redhat.com/1278744>: ec-readdir.t is
>     failing consistently
>   * 1278850 <https://bugzilla.redhat.com/1278850>: Tests/tiering:
>     Correct typo in bug-1214222-directories_miising_after_attach_tier.t
>     in bad_tests
> 
> *Known Issues:*
> 
>   * Volume commands fail with a "staging failed" message when some
>     nodes in the trusted storage pool have 3.7.6 installed and others
>     have 3.7.5 installed. Please upgrade all nodes to recover from this
>     error; a quick version check is sketched below. This issue is not
>     seen when upgrading from 3.7.4 or earlier to 3.7.6.
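> 
>     For illustration, a minimal way to confirm every node is on 3.7.6
>     before running further volume commands (run on each node; this
>     assumes the standard gluster CLI is available on the node):
> 
>         # Print the GlusterFS version installed on this node.
>         gluster --version | head -n1
>         # Confirm all peers are connected after the upgrade.
>         gluster peer status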
> 
> 
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 

