[Gluster-devel] [Gluster-users] glusterfs-3.5.3beta1 has been released for testing
David F. Robinson
david.robinson at corvidtec.com
Mon Oct 6 17:05:29 UTC 2014
You are correct... Typo on my part. It happened when I installed the 3.6.0 beta.
I'll file the bug report so that fuse installation is dependent on attr
being installed... Thanks...
------ Original Message ------
From: "Niels de Vos" <ndevos at redhat.com>
To: "David F. Robinson" <david.robinson at corvidtec.com>
Cc: gluster-users at gluster.org; gluster-devel at gluster.org
Sent: 10/6/2014 12:59:56 PM
Subject: Re: [Gluster-users] glusterfs-3.5.3beta1 has been released for testing
>On Mon, Oct 06, 2014 at 02:30:11PM +0000, David F. Robinson wrote:
>> When I installed the 3.5.3beta on my HPC cluster, I get the following
>> warnings during the mounts:
>> WARNING: getfattr not found, certain checks will be skipped..
>> I do not have attr installed on my compute nodes. Is this something I
>> need in order for gluster to work properly, or can this safely be
>> ignored?
>These checks are done in the /sbin/mount.glusterfs shell script. One of
>the things it does is prevent users from mounting a volume on a
>sub-directory of a brick.
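>To illustrate (this is only a rough sketch, not the actual code shipped
>in /sbin/mount.glusterfs), a check of this kind could walk up from the
>requested mount point and refuse to mount if an ancestor directory is a
>brick root; brick roots carry the trusted.glusterfs.volume-id extended
>attribute, which is what getfattr would be used to read:
>
>  #!/bin/sh
>  # Rough sketch of a brick-subdirectory check; not the shipped script.
>  mntpt=$1
>  if ! command -v getfattr >/dev/null 2>&1; then
>      # Without getfattr the check cannot run, so only warn.
>      echo "WARNING: getfattr not found, certain checks will be skipped.."
>  else
>      dir=$mntpt
>      while [ "$dir" != "/" ]; do
>          # Brick roots are tagged with trusted.glusterfs.volume-id.
>          if getfattr -n trusted.glusterfs.volume-id --absolute-names \
>                  "$dir" >/dev/null 2>&1; then
>              echo "ERROR: $mntpt lies inside a brick ($dir), not mounting"
>              exit 1
>          fi
>          dir=$(dirname "$dir")
>      done
>  fi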
>It is not required to have 'attr' installed on clients that mount a
>Gluster volume, but it surely is recommended for most users.
>This change introduces the warning:
> mount.glusterfs: getopts support and cleanup
>However, I do not think this is in glusterfs-3.5.3beta1. You probably
>have installed a glusterfs-3.6.0 beta on this particular client system.
>I think this is a bug in the packaging. It would be much more user
>friendly to depend on 'attr' and get it installed with glusterfs-fuse.
>Please check the version you are using and file a bug for this:
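>On an RPM-based client you can confirm what is actually installed with
>something like the following (assuming the commands are run on the
>client that prints the warning):
>
>  # Which glusterfs client packages (and versions) are installed, and
>  # is attr present?
>  rpm -q glusterfs glusterfs-fuse attr
>  # The version of the client bits and the package owning the mount helper:
>  glusterfs --version
>  rpm -qf /sbin/mount.glusterfs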
>> ------ Original Message ------
>> From: "Niels de Vos" <ndevos at redhat.com>
>> To: gluster-users at gluster.org; gluster-devel at gluster.org
>> Sent: 10/5/2014 8:44:59 AM
>> Subject: [Gluster-users] glusterfs-3.5.3beta1 has been released for testing
>> >GlusterFS 3.5.3 (beta1) has been released and is now available for
>> >testing. Get the tarball from here:
>> >Packages for different distributions will land on the download
>> >server over the next few days. When packages become available, the
>> >package maintainers will send a notification to this list.
>> >With this beta release, we make it possible for bug reporters and
>> >testers to check if issues have indeed been fixed. All community
>> >members are invited to test and/or comment on this release.
>> >This release for the 3.5 stable series includes the following bug fixes:
>> >- 1081016: glusterd needs xfsprogs and e2fsprogs packages
>> >- 1129527: DHT :- data loss - file is missing on renaming same file
>> >from multiple clients at the same time
>> >- 1129541: [DHT:REBALANCE]: Rebalance failures are seen with error
>> >message " remote operation failed: File exists"
>> >- 1132391: NFS interoperability problem: stripe-xlator removes EOF
>> >at end of READDIR
>> >- 1133949: Minor typo in afr logging
>> >- 1136221: The memories are exhausted quickly when handling the
>> >message which has multi fragments in a single record
>> >- 1136835: crash on fsync
>> >- 1138922: DHT + rebalance : rebalance process crashed + data loss +
>> >Directories are present on sub-volumes but not visible on mount
>> >point + lookup is not healing directories
>> >- 1139103: DHT + Snapshot :- If snapshot is taken when Directory is
>> >created only on hashed sub-vol; On restoring that snapshot Directory
>> >is not listed on mount point and lookup on parent is not healing
>> >- 1139170: DHT :- rm -rf is not removing stale link file and because
>> >of that unable to create file having same name as stale link file
>> >- 1139245: vdsm invoked oom-killer during rebalance and Killed
>> >process 4305, UID 0, (glusterfs nfs process)
>> >- 1140338: rebalance is not resulting in the hash layout changes
>> >being available to nfs client
>> >- 1140348: Renaming file while rebalance is in progress causes data loss
>> >- 1140549: DHT: Rebalance process crash after add-brick and
>> >`rebalance start' operation
>> >- 1140556: Core: client crash while doing rename operations on the mount
>> >- 1141558: AFR : "gluster volume heal <volume_name> info" prints
>> >random characters
>> >- 1141733: data loss when rebalance + renames are in progress and
>> >a brick from replica pairs goes down and comes back
>> >- 1142052: Very high memory usage during rebalance
>> >- 1142614: files with open fd's getting into split-brain when bricks
>> >go offline and come back online
>> >- 1144315: core: all brick processes crash when quota is enabled
>> >- 1145000: Spec %post server does not wait for the old glusterd to exit
>> >- 1147243: nfs: volume set help says the rmtab file is in
>> >To get more information about the above bugs, go to
>> >https://bugzilla.redhat.com, enter the bug number in the search box,
>> >and press enter.
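>> >(Each bug can also be opened directly with a URL of the form
>> >https://bugzilla.redhat.com/show_bug.cgi?id=<bug number>, for example
>> >https://bugzilla.redhat.com/show_bug.cgi?id=1081016 for the first one
>> >in the list above.)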
>> >If a bug from this list has not been sufficiently fixed, please open
>> >the bug report, leave a comment with details of the testing and change
>> >the status of the bug to ASSIGNED.
>> >In case someone has successfully verified a fix for a bug, please
>> >change the status of the bug to VERIFIED.
>> >The release notes have been posted for review, and a blog post
>> >provides a more readable version:
>> >- http://review.gluster.org/8903
>> >Comments in bug reports, over email or on IRC (#gluster on Freenode)
>> >are much appreciated.
>> >Thanks for testing,
>> >Gluster-users mailing list
>> >Gluster-users at gluster.org