[Gluster-users] glusterfs-3.5.3beta1 has been released for testing
David F. Robinson
david.robinson at corvidtec.com
Mon Oct 6 17:05:29 UTC 2014
You are correct... Typo on my part. It happened when I installed
3.6.0-beta3.
I'll file the bug report so that the glusterfs-fuse installation depends
on attr being installed... Thanks...
David
------ Original Message ------
From: "Niels de Vos" <ndevos at redhat.com>
To: "David F. Robinson" <david.robinson at corvidtec.com>
Cc: gluster-users at gluster.org; gluster-devel at gluster.org
Sent: 10/6/2014 12:59:56 PM
Subject: Re: [Gluster-users] glusterfs-3.5.3beta1 has been released for
testing
>On Mon, Oct 06, 2014 at 02:30:11PM +0000, David F. Robinson wrote:
>> When I installed the 3.5.3beta on my HPC cluster, I get the following
>> warnings during the mounts:
>>
>> WARNING: getfattr not found, certain checks will be skipped..
>> I do not have attr installed on my compute nodes. Is this something
>> that I need in order for gluster to work properly or can this safely
>> be ignored?
>
>These checks are done in the /sbin/mount.glusterfs shell script. One of
>the things it does is prevent users from mounting a volume on a
>sub-directory of a brick.
>
>It is not required to have 'attr' installed on clients that mount a
>Gluster volume, but it is certainly recommended for most users.
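>
>For illustration, the brick check works roughly like this (just a
>sketch from memory, not the actual script; /sbin/mount.glusterfs is
>authoritative for the exact attribute names and logic):
>
>    # A brick root carries a volume-id extended attribute (exact xattr
>    # name assumed here). Walk up from the requested mount point; if
>    # any ancestor has it, the mount point is a brick or a
>    # sub-directory of one, so refuse the mount.
>    dir="$mount_point"
>    while [ "$dir" != "/" ]; do
>        if getfattr -n trusted.glusterfs.volume-id "$dir" >/dev/null 2>&1; then
>            echo "ERROR: $mount_point is a brick or inside a brick" >&2
>            exit 1
>        fi
>        dir=$(dirname "$dir")
>    done
>
>Without getfattr (part of the 'attr' package) checks like this are
>simply skipped, which is what the warning is telling you.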
>
>This change introduces the warning:
>- http://review.gluster.org/5931
> mount.glusterfs: getopts support and cleanup
>
>However, I do not think this is in glusterfs-3.5.3beta1. You probably
>have installed a glusterfs-3.6.0 beta on this particular client system.
>
>I think this is a bug in the packaging. It would be much more user
>friendly to depend on 'attr' so that it gets installed together with
>glusterfs-fuse. Please check the version you are using and file a bug
>for this:
>- https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=build
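>
>On an RPM-based client, for example, something like this shows the
>installed version and whether 'attr' is present:
>
>    rpm -q glusterfs-fuse attr    # installed package versions, if any
>    glusterfs --version           # version reported by the binary itself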
>
>Thanks,
>Niels
>
>
>>
>> David
>>
>>
>>
>>
>>
>> ------ Original Message ------
>> From: "Niels de Vos" <ndevos at redhat.com>
>> To: gluster-users at gluster.org; gluster-devel at gluster.org
>> Sent: 10/5/2014 8:44:59 AM
>> Subject: [Gluster-users] glusterfs-3.5.3beta1 has been released for
>>testing
>>
>> >GlusterFS 3.5.3 (beta1) has been released and is now available for
>> >testing. Get the tarball from here:
>> >- http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.3beta1.tar.gz
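>> >
>> >For example, to fetch and build the beta from the tarball (a standard
>> >autotools build; the unpacked directory name and the bare configure
>> >invocation shown here are illustrative):
>> >
>> >    wget http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.3beta1.tar.gz
>> >    tar xzf glusterfs-3.5.3beta1.tar.gz
>> >    cd glusterfs-3.5.3beta1
>> >    ./configure && make && sudo make install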
>> >
>> >Packages for different distributions will land on the download server
>> >over the next few days. When packages become available, the package
>> >maintainers will send a notification to this list.
>> >
>> >With this beta release, we make it possible for bug reporters and
>> >testers to check if issues have indeed been fixed. All community
>> >members are invited to test and/or comment on this release.
>> >
>> >This release for the 3.5 stable series includes the following bug fixes:
>> >- 1081016: glusterd needs xfsprogs and e2fsprogs packages
>> >- 1129527: DHT :- data loss - file is missing on renaming same file from multiple client at same time
>> >- 1129541: [DHT:REBALANCE]: Rebalance failures are seen with error message " remote operation failed: File exists"
>> >- 1132391: NFS interoperability problem: stripe-xlator removes EOF at end of READDIR
>> >- 1133949: Minor typo in afr logging
>> >- 1136221: The memories are exhausted quickly when handle the message which has multi fragments in a single record
>> >- 1136835: crash on fsync
>> >- 1138922: DHT + rebalance : rebalance process crashed + data loss + few Directories are present on sub-volumes but not visible on mount point + lookup is not healing directories
>> >- 1139103: DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing
>> >- 1139170: DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file
>> >- 1139245: vdsm invoked oom-killer during rebalance and Killed process 4305, UID 0, (glusterfs nfs process)
>> >- 1140338: rebalance is not resulting in the hash layout changes being available to nfs client
>> >- 1140348: Renaming file while rebalance is in progress causes data loss
>> >- 1140549: DHT: Rebalance process crash after add-brick and `rebalance start' operation
>> >- 1140556: Core: client crash while doing rename operations on the mount
>> >- 1141558: AFR : "gluster volume heal <volume_name> info" prints some random characters
>> >- 1141733: data loss when rebalance + renames are in progress and bricks from replica pairs goes down and comes back
>> >- 1142052: Very high memory usage during rebalance
>> >- 1142614: files with open fd's getting into split-brain when bricks goes offline and comes back online
>> >- 1144315: core: all brick processes crash when quota is enabled
>> >- 1145000: Spec %post server does not wait for the old glusterd to exit
>> >- 1147243: nfs: volume set help says the rmtab file is in "/var/lib/glusterd/rmtab"
>> >
>> >To get more information about the above bugs, go to
>> >https://bugzilla.redhat.com, enter the bug number in the search box
>> >and press enter.
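>> >
>> >Each bug can also be opened directly by its number, for example:
>> >
>> >    https://bugzilla.redhat.com/show_bug.cgi?id=1081016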
>> >
>> >If a bug from this list has not been sufficiently fixed, please open
>> >the bug report, leave a comment with details of the testing and
>> >change the status of the bug to ASSIGNED.
>> >
>> >In case someone has successfully verified a fix for a bug, please
>> >change the status of the bug to VERIFIED.
>> >
>> >The release notes have been posted for review, and a blog post
>> >contains an easier to read version:
>> >- http://review.gluster.org/8903
>> >- http://blog.nixpanic.net/2014/10/glusterfs-353beta1-has-been-released.html
>> >
>> >Comments in bug reports, over email or on IRC (#gluster on Freenode)
>> >are much appreciated.
>> >
>> >Thanks for testing,
>> >Niels
>> >
>> >_______________________________________________
>> >Gluster-users mailing list
>> >Gluster-users at gluster.org
>> >http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>