[Gluster-devel] [Gluster-users] glusterfs-3.5.1beta released

Krishnan Parthasarathi kparthas at redhat.com
Thu May 29 02:09:09 UTC 2014


Franco,

When your clients perceive a hang, could you check the status of the bricks by running,
# gluster volume status VOLNAME  (run this on one of the 'server' machines in the cluster.)
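
For example, assuming the volume is named 'data2' as your client log below
suggests (substitute your actual volume name):

# gluster volume status data2  (check that every brick is shown as online and has a port assigned)

A brick that is not online, or that has no port assigned, would explain the
portmap error you see on the client.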

Could you also provide a statedump of the client(s) by issuing the following command:

# kill -SIGUSR1 pid-of-mount-process (run this on the 'client' machine.)

This will dump the client's state information, such as the file operations in progress
and the memory consumed, to a file under $INSTALL_PREFIX/var/run/gluster. Please attach
this file to your response.
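
A rough sequence, assuming the volume is mounted at /mnt/data2 on the client
(adjust the mount point and grep pattern to your setup), would look like:

# ps aux | grep glusterfs                   (find the pid of the glusterfs mount process for /mnt/data2)
# kill -SIGUSR1 PID                         (replace PID with the pid found above)
# ls -lt $INSTALL_PREFIX/var/run/gluster    (the newest file there is the statedump to attach)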

thanks,
Krish

----- Original Message -----
> Hi
> 
> My clients are running 3.4.1. When I try to mount from many machines
> simultaneously, some of the mounts hang. Stopping and starting the
> volume clears the hung mounts.
> 
> Errors in the client logs:
> 
> [2014-05-28 01:47:15.930866] E
> [client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-3: failed
> to get the port number for remote subvolume. Please run 'gluster volume
> status' on server to see if brick process is running.
> 
> Let me know if you want more information.
> 
> Cheers,
> 
> On Sun, 2014-05-25 at 11:55 +0200, Niels de Vos wrote:
> > On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:
> > > 
> > > SRC:
> > > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz
> > 
> > This beta release is intended to verify the changes that should resolve
> > the bugs listed below. We appreciate tests done by anyone. Please leave
> > a comment in the respective bug report with a short description of the
> > success or failure. Visiting one of the bug reports is as easy as opening
> > the bugzilla.redhat.com/$BUG URL; for the first bug in the list, this
> > results in http://bugzilla.redhat.com/765202.
> > 
> > Bugs expected to be fixed (31 in total since 3.5.0):
> > 
> >  #765202 - lgetxattr called with invalid keys on the bricks
> >  #833586 - inodelk hang from marker_rename_release_newp_lock
> >  #859581 - self-heal process can sometimes create directories instead of
> >  symlinks for the root gfid file in .glusterfs
> >  #986429 - Backupvolfile server option should work internal to GlusterFS
> >  framework
> > #1039544 - [FEAT] "gluster volume heal info" should list the entries that
> > actually required to be healed.
> > #1046624 - Unable to heal symbolic Links
> > #1046853 - AFR : For every file self-heal there are warning messages
> > reported in glustershd.log file
> > #1063190 - [RHEV-RHS] Volume was not accessible after server side quorum
> > was met
> > #1064096 - The old Python Translator code (not Glupy) should be removed
> > #1066996 - Using sanlock on a gluster mount with replica 3 (quorum-type
> > auto) leads to a split-brain
> > #1071191 - [3.5.1] Sporadic SIGBUS with mmap() on a sparse file created
> > with open(), seek(), write()
> > #1078061 - Need ability to heal mismatching user extended attributes
> > without any changelogs
> > #1078365 - New xlators are linked as versioned .so files, creating
> > <xlator>.so.0.0.0
> > #1086748 - Add documentation for the Feature: AFR CLI enhancements
> > #1086750 - Add documentation for the Feature: File Snapshots in GlusterFS
> > #1086752 - Add documentation for the Feature: On-Wire
> > Compression/Decompression
> > #1086756 - Add documentation for the Feature: zerofill API for GlusterFS
> > #1086758 - Add documentation for the Feature: Changelog based parallel
> > geo-replication
> > #1086760 - Add documentation for the Feature: Write Once Read Many (WORM)
> > volume
> > #1086762 - Add documentation for the Feature: BD Xlator - Block Device
> > translator
> > #1088848 - Spelling errors in rpc/rpc-transport/rdma/src/rdma.c
> > #1089054 - gf-error-codes.h is missing from source tarball
> > #1089470 - SMB: Crash on brick process during compile kernel.
> > #1089934 - list dir with more than N files results in Input/output error
> > #1091340 - Doc: Add glfs_fini known issue to release notes 3.5
> > #1091392 - glusterfs.spec.in: minor/nit changes to sync with Fedora spec
> > #1095775 - Add support in libgfapi to fetch volume info from glusterd.
> > #1095971 - Stopping/Starting a Gluster volume resets ownership
> > #1096040 - AFR : self-heal-daemon not clearing the change-logs of all the
> > sources after self-heal
> > #1096425 - i/o error when one user tries to access RHS volume over NFS with
> > 100+ GIDs
> > #1099878 - Need support for handle based Ops to fetch/modify extended
> > attributes of a file
> > 
> > 
> > Before a final glusterfs-3.5.1 release is made, we hope to have all the
> > blocker bugs fixed. There are currently 13 bugs marked as blockers that
> > still need some work:
> > 
> > #1081016 - glusterd needs xfsprogs and e2fsprogs packages
> > #1086743 - Add documentation for the Feature: RDMA-connection manager
> > (RDMA-CM)
> > #1086749 - Add documentation for the Feature: Exposing Volume Capabilities
> > #1086751 - Add documentation for the Feature: gfid-access
> > #1086754 - Add documentation for the Feature: Quota Scalability
> > #1086755 - Add documentation for the Feature: readdir-ahead
> > #1086759 - Add documentation for the Feature: Improved block device
> > translator
> > #1086766 - Add documentation for the Feature: Libgfapi
> > #1086774 - Add documentation for the Feature: Access Control List - Version
> > 3 support for Gluster NFS
> > #1086781 - Add documentation for the Feature: Eager locking
> > #1086782 - Add documentation for the Feature: glusterfs and  oVirt
> > integration
> > #1086783 - Add documentation for the Feature: qemu 1.3 - libgfapi
> > integration
> > #1095595 - Stick to IANA standard while allocating brick ports
> > 
> > A more detailed overview of the status of each of these bugs is here:
> > - https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-3.5.1
> > 
> > Cheers,
> > Niels
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> 

