[Gluster-users] [CentOS-devel] Gluster Updates for Storage SIG
Ravishankar N
ravishankar at redhat.com
Tue Feb 10 12:09:33 UTC 2015
On 02/07/2015 01:39 AM, Humble Devassy Chirammal wrote:
> Lala,
>
> Gluster 3.4 server bits with 3.6 client bits should work fine.
>
>
> Have you tested this configuration? IIUC, the 'AFR' module
> (replication) introduced its 'version 2' implementation in 3.6, which
> is not compatible with the older version. GlusterFS 3.4 and 3.5
> ship with AFR v1, so I really doubt the mentioned
> configuration will work perfectly. The AFR folks can confirm, though.
>
> --Humble
>
That is right. Usually newer clients work with older servers, but
3.6 saw a complete rewrite of AFR (AFR-v2) which is not fully compatible
with AFR-v1. It would be best to update all the clients to 3.6 as well.
-Ravi
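(Editorial note: the compatibility rule above can be sketched as a pre-flight check. This is a minimal, hypothetical illustration with hard-coded versions; in practice the values would come from `glusterfs --version` on the client and `glusterfsd --version` on the server.)

```shell
# Hypothetical pre-flight check: compare client and server GlusterFS
# major.minor versions before mounting. Versions are hard-coded here
# for illustration only.
client_ver="3.6.2"
server_ver="3.4.2"

# Keep only major.minor (e.g. "3.6.2" -> "3.6").
client_mm=${client_ver%.*}
server_mm=${server_ver%.*}

if [ "$client_mm" = "$server_mm" ]; then
    echo "versions match: safe to mount"
else
    echo "version skew ($client_mm client vs $server_mm server): check AFR compatibility first"
fi
```

With the values above this prints the version-skew warning, which matches the 3.6-client/3.4-server case discussed in this thread.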
>
> On Fri, Feb 6, 2015 at 7:01 PM, Lalatendu Mohanty
> <lmohanty at redhat.com> wrote:
>
> On 02/06/2015 08:11 AM, Humble Devassy Chirammal wrote:
>> On 02/05/2015 11:56 PM, Nux! wrote:
>>
>> Thanks for sharing.
>> Any idea if 3.6.2 still is compatible with v3.4 servers?
>>
>>
>> > You mean 3.6.2 client bits with v3.4 servers? Yes, it should
>> > work fine.
>>
>>
>> AFAICT, this will *not* work and it's *not* supported.
>>
>>
>
> Humble,
>
> Gluster 3.4 server bits with 3.6 client bits should work fine.
>
> But I think the reverse (i.e. 3.6 server bits with older client
> bits) is not compatible, because of the issues below:
>
> * Older clients cannot mount a newly created volume on 3.6.
> This is because readdir-ahead is enabled on the volume by
> default, and that translator isn't present in older clients.
> * We can't run rebalance on any volume created with 3.6 bits
> (with or without readdir-ahead) while older clients are
> connected. The rebalance command will error out if older
> clients are connected.
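(Editorial note: if the older clients cannot be upgraded right away, the mount failure above can in principle be avoided by switching readdir-ahead off on the volume. A sketch, assuming a hypothetical volume named `testvol`; the rebalance restriction is a separate limitation and would remain.)

```shell
# On a 3.6 server: disable readdir-ahead so that pre-3.6 clients
# can mount the volume (the volume name "testvol" is hypothetical).
gluster volume set testvol performance.readdir-ahead off

# Verify the option change took effect.
gluster volume info testvol
```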
>
> Thanks,
> Lala
>
>>
>>
>> --Humble
>>
>>
>> On Fri, Feb 6, 2015 at 5:02 AM, Lalatendu Mohanty
>> <lmohanty at redhat.com> wrote:
>>
>> + gluster-users
>> On 02/05/2015 11:56 PM, Nux! wrote:
>>
>> Thanks for sharing.
>> Any idea if 3.6.2 still is compatible with v3.4 servers?
>>
>>
>> You mean 3.6.2 client bits with v3.4 servers? Yes, it should
>> work fine.
>>
>> -Lala
>>
>> --
>> Sent from the Delta quadrant using Borg technology!
>>
>> Nux!
>> www.nux.ro
>>
>> ----- Original Message -----
>>
>> From: "Karanbir Singh" <mail-lists at karan.org>
>> To: "The CentOS developers mailing list."
>> <centos-devel at centos.org>
>> Sent: Thursday, 5 February, 2015 22:11:53
>> Subject: [CentOS-devel] Gluster Updates for Storage SIG
>> The CentOS Storage SIG has updated Gluster to 3.6.2
>> in the community
>> testing repos. You can find more information on how to
>> get started with
>> this repo at:
>> http://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
>>
>> The following RPMs have been updated:
>>
>> CentOS-6
>> i386/glusterfs-3.6.2-2.el6.i386.rpm
>> i386/glusterfs-api-3.6.2-2.el6.i386.rpm
>> i386/glusterfs-api-devel-3.6.2-2.el6.i386.rpm
>> i386/glusterfs-cli-3.6.2-2.el6.i386.rpm
>> i386/glusterfs-devel-3.6.2-2.el6.i386.rpm
>> i386/glusterfs-extra-xlators-3.6.2-2.el6.i386.rpm
>> i386/glusterfs-fuse-3.6.2-2.el6.i386.rpm
>> i386/glusterfs-geo-replication-3.6.2-2.el6.i386.rpm
>> i386/glusterfs-libs-3.6.2-2.el6.i386.rpm
>> i386/glusterfs-rdma-3.6.2-2.el6.i386.rpm
>> i386/glusterfs-resource-agents-3.6.2-2.el6.noarch.rpm
>> i386/glusterfs-server-3.6.2-2.el6.i386.rpm
>>
>> x86_64/glusterfs-3.6.2-2.el6.x86_64.rpm
>> x86_64/glusterfs-api-3.6.2-2.el6.i386.rpm
>> x86_64/glusterfs-api-3.6.2-2.el6.x86_64.rpm
>> x86_64/glusterfs-api-devel-3.6.2-2.el6.i386.rpm
>> x86_64/glusterfs-api-devel-3.6.2-2.el6.x86_64.rpm
>> x86_64/glusterfs-cli-3.6.2-2.el6.x86_64.rpm
>> x86_64/glusterfs-devel-3.6.2-2.el6.i386.rpm
>> x86_64/glusterfs-devel-3.6.2-2.el6.x86_64.rpm
>> x86_64/glusterfs-extra-xlators-3.6.2-2.el6.x86_64.rpm
>> x86_64/glusterfs-fuse-3.6.2-2.el6.x86_64.rpm
>> x86_64/glusterfs-geo-replication-3.6.2-2.el6.x86_64.rpm
>> x86_64/glusterfs-libs-3.6.2-2.el6.i386.rpm
>> x86_64/glusterfs-libs-3.6.2-2.el6.x86_64.rpm
>> x86_64/glusterfs-rdma-3.6.2-2.el6.x86_64.rpm
>> x86_64/glusterfs-resource-agents-3.6.2-2.el6.noarch.rpm
>> x86_64/glusterfs-server-3.6.2-2.el6.x86_64.rpm
>>
>> CentOS-7
>> x86_64/glusterfs-3.6.2-2.el7.x86_64.rpm
>> x86_64/glusterfs-api-3.6.2-2.el7.x86_64.rpm
>> x86_64/glusterfs-api-devel-3.6.2-2.el7.x86_64.rpm
>> x86_64/glusterfs-cli-3.6.2-2.el7.x86_64.rpm
>> x86_64/glusterfs-devel-3.6.2-2.el7.x86_64.rpm
>> x86_64/glusterfs-extra-xlators-3.6.2-2.el7.x86_64.rpm
>> x86_64/glusterfs-fuse-3.6.2-2.el7.x86_64.rpm
>> x86_64/glusterfs-geo-replication-3.6.2-2.el7.x86_64.rpm
>> x86_64/glusterfs-libs-3.6.2-2.el7.x86_64.rpm
>> x86_64/glusterfs-rdma-3.6.2-2.el7.x86_64.rpm
>> x86_64/glusterfs-resource-agents-3.6.2-2.el7.noarch.rpm
>> x86_64/glusterfs-server-3.6.2-2.el7.x86_64.rpm
>>
>>
>> This release fixes the following bugs. The content below is
>> copied from the
>> GlusterFS upstream release announcement [1].
>>
>> 1184191 - Cluster/DHT : Fixed crash due to null deref
>> 1180404 - nfs server restarts when a snapshot is
>> deactivated
>> 1180411 - CIFS:[USS]: glusterfsd OOM killed when 255
>> snapshots were
>> browsed at CIFS mount and Control+C is issued
>> 1180070 - [AFR] getfattr on fuse mount gives error :
>> Software caused
>> connection abort
>> 1175753 - [readdir-ahead]: indicate EOF for readdirp
>> 1175752 - [USS]: On a successful lookup, snapd logs
>> are filled with
>> Warnings "dict OR key (entry-point) is NULL"
>> 1175749 - glusterfs client crashed while migrating
>> the fds
>> 1179658 - Add brick fails if parent dir of new brick
>> and existing brick
>> is same and volume was accessed using libgfapi and smb.
>> 1146524 - glusterfs.spec.in - synch minor diffs with
>> fedora dist-git
>> glusterfs.spec
>> 1175744 - [USS]: Unable to access .snaps after
>> snapshot restore after
>> directories were deleted and recreated
>> 1175742 - [USS]: browsing .snaps directory with CIFS
>> fails with
>> "Invalid argument"
>> 1175739 - [USS]: Non root user who has no access to a
>> directory, from
>> NFS mount, is able to access the files under .snaps
>> under that directory
>> 1175758 - [USS] : Rebalance process tries to connect
>> to snapd and in
>> case when snapd crashes it might affect rebalance process
>> 1175765 - [USS]: When snapd is crashed gluster volume
>> stop/delete
>> operation fails making the cluster in inconsistent state
>> 1173528 - Change in volume heal info command output
>> 1166515 - [Tracker] RDMA support in glusterfs
>> 1166505 - mount fails for nfs protocol in rdma volumes
>> 1138385 - [DHT:REBALANCE]: Rebalance failures are
>> seen with error
>> message " remote operation failed: File exists"
>> 1177418 - entry self-heal in 3.5 and 3.6 are not
>> compatible
>> 1170954 - Fix mutex problems reported by coverity scan
>> 1177899 - nfs: ls shows "Permission denied" with
>> root-squash
>> 1175738 - [USS]: data unavailability for a period of
>> time when USS is
>> enabled/disabled
>> 1175736 - [USS]:After deactivating a snapshot trying
>> to access the
>> remaining activated snapshots from NFS mount gives
>> 'Invalid argument' error
>> 1175735 - [USS]: snapd process is not killed once the
>> glusterd comes back
>> 1175733 - [USS]: If the snap name is same as
>> snap-directory then cd to
>> virtual snap directory fails
>> 1175756 - [USS] : Snapd crashed while trying to
>> access the snapshots
>> under .snaps directory
>> 1175755 - SNAPSHOT[USS]:gluster volume set for uss
>> does not check any
>> boundaries
>> 1175732 - [SNAPSHOT]: nouuid is appended for every
>> snapshoted brick
>> which causes duplication if the original brick has
>> already nouuid
>> 1175730 - [USS]: creating file/directories under
>> .snaps shows wrong
>> error message
>> 1175754 - [SNAPSHOT]: before the snap is marked to be
>> deleted if the
>> node goes down then the snaps are propagated on other
>> nodes and glusterd
>> hangs
>> 1159484 - ls -alR can not heal the disperse volume
>> 1138897 - NetBSD port
>> 1175728 - [USS]: All uss related logs are reported under
>> /var/log/glusterfs, it makes sense to move it into
>> subfolder
>> 1170548 - [USS] : don't display the snapshots which
>> are not activated
>> 1170921 - [SNAPSHOT]: snapshot should be deactivated
>> by default when
>> created
>> 1175694 - [SNAPSHOT]: snapshoted volume is read only
>> but it shows rw
>> attributes in mount
>> 1161885 - Possible file corruption on dispersed volumes
>> 1170959 - EC_MAX_NODES is defined incorrectly
>> 1175645 - [USS]: Typo error in the description for
>> USS under "gluster
>> volume set help"
>> 1171259 - mount.glusterfs does not understand -n option
>>
>> [1]
>> http://www.gluster.org/pipermail/gluster-devel/2015-January/043617.html
>>
>> --
>> Karanbir Singh
>> +44-207-0999389 | http://www.karan.org/ |
>> twitter.com/kbsingh
>> GnuPG Key : http://www.karan.org/publickey.asc
>> _______________________________________________
>> CentOS-devel mailing list
>> CentOS-devel at centos.org
>> http://lists.centos.org/mailman/listinfo/centos-devel
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>