[Gluster-users] gluster under the hood
Ilan Schwarts
ilan84 at gmail.com
Wed Aug 9 11:02:38 UTC 2017
Hi,
I am using glusterfs 3.10.3 on CentOS 7.3 with kernel 3.10.0-514. I have
two machines as server nodes for my volume and one client machine running
CentOS 7.2 with the same kernel.
From the client:
[root at CentOS7286-64 ~]# rpm -qa *gluster*
glusterfs-api-3.7.9-12.el7.centos.x86_64
glusterfs-libs-3.7.9-12.el7.centos.x86_64
glusterfs-fuse-3.7.9-12.el7.centos.x86_64
glusterfs-client-xlators-3.7.9-12.el7.centos.x86_64
glusterfs-3.7.9-12.el7.centos.x86_64
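One thing I notice while pasting this: my client packages are 3.7.9 while
the servers run 3.10.3. I do not know whether that mismatch is related to
my problem, but it is easy to check on every machine:

glusterfs --version
rpm -qa 'gluster*'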
From the Node1 server:
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick L137B-GlusterFS-Node1.L137B-root.com:
/gluster/volume1                            49152     0          Y       28370
Brick L137B-GlusterFS-Node2.L137B-root.com:
/gluster/volume1                            49152     0          Y       16123
Self-heal Daemon on localhost               N/A       N/A        Y       30618
Self-heal Daemon on
L137B-GlusterFS-Node2.L137B-root.com        N/A       N/A        Y       17987

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks
The documentation says to mount a GlusterFS volume using the command:
mount -t glusterfs serverNode:share /local/directory
What is going on under the hood when this command runs? Which NFS is
being used: the kernel NFS server, or NFS-Ganesha? Does the option
"Volume1.options.nfs.disable: on" indicate whether the volume is exported
via kernel NFS or via NFS-Ganesha?
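My own guess (an assumption I have not verified) is that this mount path
does not involve NFS at all: the /sbin/mount.glusterfs helper seems to
just start a FUSE client process, roughly equivalent to:

glusterfs --volfile-server=serverNode --volfile-id=share /local/directory

and that nfs.disable only controls Gluster's built-in NFS export, which
would be what showmount talks to. Please correct me if that is wrong.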
When Volume1.options.nfs.disable is off, I *can* run "showmount -e Node1"
from the client machine; when I set Volume1.options.nfs.disable to on, I
can *no longer* use "showmount -e ..".
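For completeness, these are the commands I use to toggle and inspect the
option (volume1 is my volume name):

gluster volume set volume1 nfs.disable off
gluster volume get volume1 nfs.disable
showmount -e L137B-GlusterFS-Node1.L137B-root.com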
When I mount using the command:
mount -t glusterfs L137B-GlusterFS-Node2.L137B-root.com:/volume1 /mnt/glusterfs
from the client machine, the command gets stuck and does not respond.
From Node1, the same command succeeds.
All machines are in the same domain and I have disabled the firewall.
What am I missing?
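In case it helps, my next debugging step is to retry the mount with
verbose client logging and to verify from the client that the relevant
ports are reachable (24007 for glusterd, and the brick port 49152 shown
in the status output above):

mount -t glusterfs -o log-level=DEBUG,log-file=/tmp/gluster-mnt.log \
    L137B-GlusterFS-Node2.L137B-root.com:/volume1 /mnt/glusterfs
nc -zv L137B-GlusterFS-Node2.L137B-root.com 24007
nc -zv L137B-GlusterFS-Node2.L137B-root.com 49152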