[Gluster-users] unable to mount glusterfs via FUSE

Dung Le vic_le at icloud.com
Sun Mar 12 08:00:29 UTC 2017


Hi community,

I have a few issues with my GlusterFS setup. Below is my Gluster configuration:

Configuration:
3 x storage nodes
Replica 3 volume
ZFS for the bricks
Pacemaker and Corosync
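
For reference, the volume was created roughly like this (a reconstruction from the volume info below, not the exact command history):

  gluster peer probe node2
  gluster peer probe node3
  gluster volume create vol1 replica 3 \
      node1:/zfsvol/brick01/vol1 \
      node2:/zfsvol/brick01/vol1 \
      node3:/zfsvol/brick01/vol1
  gluster volume start vol1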

node1:/root => gluster --version
glusterfs 3.7.11 built on Apr 27 2016 14:09:22
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

node1:/root => gluster vol info vol1
Volume Name: vol1
Type: Replicate
Volume ID: 3a74d652-69cb-449c-a8a9-88d790a4d4c1
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/zfsvol/brick01/vol1
Brick2: node2:/zfsvol/brick01/vol1
Brick3: node3:/zfsvol/brick01/vol1
Options Reconfigured:
storage.build-pgfid: on
nfs.export-volumes: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
nfs.disable: off
nfs.acl: off
nfs.rpc-auth-allow: 127.0.0.1,17.158.0.0/15
nfs.addr-namelookup: off
nfs.drc: off
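
Note that nfs.disable is off, so the built-in Gluster NFS server should be running on each node. A quick way to confirm that the bricks, the NFS server and the self-heal daemon are all up (standard gluster CLI, output omitted here; nfs001 is the address the clients mount):

  gluster volume status vol1
  gluster volume status vol1 nfs
  rpcinfo -p nfs001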

Issue:
1. The clients keep getting the following error messages (diagnostic sketch after this list):
nfs: server nfs001 not responding, still trying
nfs: server nfs001 not responding, still trying
nfs: server nfs001 not responding, still trying
nfs: server nfs001 not responding, still trying
nfs: server nfs001 not responding, still trying
nfs: server nfs001 OK
nfs: server nfs001 OK
nfs: server nfs001 OK
nfs: server nfs001 OK
nfs: server nfs001 OK
2. Opening a file with vi takes up to 3 minutes, and sometimes hangs completely.
3. I can mount the volume via the native GlusterFS NFS server, but mounting via FUSE does not work: df hangs after the FUSE mount (debug-mount sketch below).
4. After node2 was rebooted, the used size of its brick differs from the other nodes (node1 & node3): node2 shows far less used capacity (df output below).
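
For issues 1 and 3, these are the diagnostics I can run next (a sketch; /mnt/test is just an example mount point):

  # Issue 1: is the Gluster NFS server alive and registered with rpcbind?
  gluster volume status vol1 nfs
  rpcinfo -p nfs001
  showmount -e nfs001

  # Issue 3: retry the FUSE mount with debug logging to see where it hangs
  mount -t glusterfs \
      -o log-level=DEBUG,log-file=/var/log/glusterfs/vol1-fuse.log,backup-volfile-servers=node2:node3 \
      node1:/vol1 /mnt/test
  df -h /mnt/test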

node1:/root => df -h
Filesystem            Size  Used Avail Use% Mounted on
zfsvol/brick01        5.1T  2.3T  2.9T  44% /zfsvol/brick01
localhost:/vol1       5.2T  2.4T  2.8T  46% /vol1

node2:/root => df -h  (this node got rebooted)
Filesystem            Size  Used Avail Use% Mounted on
zfsvol/brick01        4.0T  686G  3.3T  17% /zfsvol/brick01
localhost:/vol1       5.2T  2.4T  2.8T  46% /vol1

node3:/root => df -h
Filesystem            Size  Used Avail Use% Mounted on
zfsvol/brick01        5.2T  2.4T  2.8T  46% /zfsvol/brick01
localhost:/vol1       5.2T  2.4T  2.8T  46% /vol1
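
Since node2 is the node that was rebooted, my first guess for issue 4 is a self-heal backlog, i.e. node2's brick has not caught up with the other replicas yet. A sketch of how I can check:

  # Entries still pending heal; node2's count should drain to zero over time
  gluster volume heal vol1 info

  # Space actually used on the ZFS dataset backing the brick
  zfs list zfsvol/brick01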

Any ideas?

Thanks,
~ Vic Le


