[Gluster-users] Error when mounting glusterfs on VM

Jiffin Tony Thottan jthottan at redhat.com
Thu Jul 16 07:11:50 UTC 2015



On 16/07/15 11:52, Kaamesh Kamalaaharan wrote:
> Hi everyone,
> I'm trying to mount my volume on my VM, but I'm encountering several problems.
> 1) Using an NFS mount for my gluster volume, I am able to mount
> normally, but when I access executables stored on the gluster volume,
> the process hangs and I have to cancel it. Once I cancel the process,
> the mount point is no longer accessible and I'm unable to view any
> files on it.
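(For reference: Gluster's built-in NFS server speaks NFSv3 over TCP only, so the client-side mount would normally look something like the sketch below. The server name and mount point here are placeholders; the volume name is taken from the log further down.)

    # Sketch: mount a Gluster volume via its built-in NFSv3 server.
    # "gfs1" and /mnt/gluster are placeholders for your setup.
    mount -t nfs -o vers=3,proto=tcp gfs1:/gfsvolume /mnt/gluster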

Can you please provide the following details (a consolidated command sketch follows the list):

On the client:

1) The output of showmount -e <server_ip>

On the server:

2) gluster volume status

3) The PID of the gluster-nfs process running on the brick node (if there is one)

4) nfs.log and the rpcinfo output
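Roughly, that translates to the commands below (a sketch: the volume name is taken from your volfile, and the log path is the usual default, so adjust both as needed):

    # on the client: list the server's NFS exports
    showmount -e <server_ip>

    # on the server: volume health, including the "NFS Server" PID per node
    gluster volume status gfsvolume

    # on the server: confirm a gluster-nfs process is actually running
    ps aux | grep '[g]lusterfs' | grep nfs

    # on the server: registered RPC services (nfs and mountd should appear)
    rpcinfo -p

    # on the server: the gluster-nfs log, in its default location
    less /var/log/glusterfs/nfs.log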

Thanks,
Jiffin

>
> 2) Using the gluster client (same version as the server, 3.6.2) to mount my
> gluster volume, I'm getting the following messages in the gluster log:
>
>         1: volume gfsvolume-client-0
>           2:     type protocol/client
>           3:     option ping-timeout 30
>           4:     option remote-host gfs1
>           5:     option remote-subvolume /export/sda/brick
>           6:     option transport-type socket
>           7:     option frame-timeout 90
>           8:     option send-gids true
>           9: end-volume
>          10:
>          11: volume gfsvolume-client-1
>          12:     type protocol/client
>          13:     option ping-timeout 30
>          14:     option remote-host gfs2
>          15:     option remote-subvolume /export/sda/brick
>          16:     option transport-type socket
>          17:     option frame-timeout 90
>          18:     option send-gids true
>          19: end-volume
>          20:
>          21: volume gfsvolume-replicate-0
>          22:     type cluster/replicate
>          23:     option data-self-heal-algorithm diff
>          24:     option quorum-type fixed
>          25:     option quorum-count 1
>          26:     subvolumes gfsvolume-client-0 gfsvolume-client-1
>          27: end-volume
>          28:
>          29: volume gfsvolume-dht
>          30:     type cluster/distribute
>          31:     subvolumes gfsvolume-replicate-0
>          32: end-volume
>          33:
>          34: volume gfsvolume-write-behind
>          35:     type performance/write-behind
>          36:     option cache-size 4MB
>          37:     subvolumes gfsvolume-dht
>          38: end-volume
>          39:
>          40: volume gfsvolume-read-ahead
>          41:     type performance/read-ahead
>          42:     subvolumes gfsvolume-write-behind
>          43: end-volume
>          44:
>          45: volume gfsvolume-io-cache
>          46:     type performance/io-cache
>          47:     option max-file-size 2MB
>          48:     option cache-timeout 60
>          49:     option cache-size 6442450944
>          50:     subvolumes gfsvolume-read-ahead
>          51: end-volume
>          52:
>          53: volume gfsvolume-open-behind
>          54:     type performance/open-behind
>          55:     subvolumes gfsvolume-io-cache
>          56: end-volume
>          57:
>          58: volume gfsvolume-md-cache
>          59:     type performance/md-cache
>          60:     subvolumes gfsvolume-open-behind
>          61: end-volume
>          62:
>          63: volume gfsvolume
>         [2015-07-16 03:31:27.231127] E [MSGID: 108006]
>         [afr-common.c:3591:afr_notify] 0-gfsvolume-replicate-0: All
>         subvolumes are down. Going offline until atleast one of them
>         comes back up.
>         [2015-07-16 03:31:27.232201] W [MSGID: 108001]
>         [afr-common.c:3635:afr_notify] 0-gfsvolume-replicate-0:
>         Client-quorum is not met
>          64:     type debug/io-stats
>          65:     option latency-measurement on
>          66:     option count-fop-hits on
>          67:     subvolumes gfsvolume-md-cache
>          68: end-volume
>          69:
>          70: volume meta-autoload
>          71:     type meta
>          72:     subvolumes gfsvolume
>          73: end-volume
>          74:
>         +------------------------------------------------------------------------------+
>         [2015-07-16 03:31:27.254509] I
>         [fuse-bridge.c:5080:fuse_graph_setup] 0-fuse: switched to graph 0
>         [2015-07-16 03:31:27.255262] I [fuse-bridge.c:4009:fuse_init]
>         0-glusterfs-fuse: FUSE inited with protocol versions:
>         glusterfs 7.22 kernel 7.17
>         [2015-07-16 03:31:27.256340] I
>         [afr-common.c:3722:afr_local_init] 0-gfsvolume-replicate-0: no
>         subvolumes up
>         [2015-07-16 03:31:27.256722] I
>         [afr-common.c:3722:afr_local_init] 0-gfsvolume-replicate-0: no
>         subvolumes up
>         [2015-07-16 03:31:27.256840] W
>         [fuse-bridge.c:779:fuse_attr_cbk] 0-glusterfs-fuse: 2:
>         LOOKUP() / => -1 (Transport endpoint is not connected)
>         [2015-07-16 03:31:27.284927] I
>         [fuse-bridge.c:4921:fuse_thread_proc] 0-fuse: unmounting /export
>         [2015-07-16 03:31:27.285919] W
>         [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum
>         (15), shutting down
>         [2015-07-16 03:31:27.286052] I [fuse-bridge.c:5599:fini]
>         0-fuse: Unmounting '/export'.
>
>
>
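The "All subvolumes are down" and "Client-quorum is not met" messages above mean the FUSE client never managed to connect to either brick, so the mount (something like "mount -t glusterfs gfs1:/gfsvolume /export", judging from the log) gives up immediately. A quick reachability test from the VM might look like the sketch below; 24007 is glusterd's standard port, bricks on 3.6 listen on 49152 and up, and the exact brick ports are shown by gluster volume status:

    # from the VM: can we reach glusterd on each server?
    nc -zv gfs1 24007
    nc -zv gfs2 24007

    # then test a brick port reported by 'gluster volume status', e.g.:
    nc -zv gfs1 49152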
> I have deployed gluster successfully on my physical machines, and my
> network is fine. I used the exact same steps to install the gluster
> client on my VM as on my physical machines. Some information that may
> help narrow down the error:
>
> - My physical machines are fine and show no error messages in the log.
> - I am able to ssh into the machine, so it is not a network problem.
> - There are no firewall rules on the gluster server or the client.
> - I have run modprobe fuse successfully.
> - NFS is installed on all machines.
> - The gluster versions are the same on all machines.
> - gluster volume status shows all processes online.
> - gluster volume status clients does not show my client machine (see the
>   command sketch below).
> - The server can ping the client and the client can ping the server.
> - I tried restarting rpcbind, but it didn't fix my problem.
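On the "clients" point above, presumably the command in question is the one sketched here (volume name taken from the volfile):

    # on a server: list the clients currently connected to each brick
    gluster volume status gfsvolume clients

If the VM is missing from that list, the client-to-brick connections are failing, which is consistent with the "All subvolumes are down" messages in the mount log.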
>
> If there is anything else you need from me to help fix this, please do
> let me know and I'll pass the info on to you.
>
> Eid Mubarak to all those who are celebrating, and thank you in advance
> for your assistance.
>
> Thanks,
> Kaamesh
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
