[Gluster-users] Unable to mount gfs gv0 volume Enterprise Linux Enterprise Linux Server release 5.6 (Carthage)
Jeffrey Brewster
jab2805 at yahoo.com
Thu Jan 16 20:19:53 UTC 2014
Thanks! I will try on 6.4.
On Thursday, January 16, 2014 3:02 PM, Lalatendu Mohanty <lmohanty at redhat.com> wrote:
On 01/16/2014 08:29 PM, Jeffrey Brewster wrote:
>
>Please find the packages I have installed. I have been using the quick start doc, so I have been trying to mount locally. Do I need a 3.4.2 client rpm? I have not seen one.
>
Jeffrey,
If you are trying to mount the volume locally on the gluster node, you don't need any extra client packages.
I believe you are using the link below for the quick start guide. In the guide, the recommended distribution for trying these steps is Fedora 20. I have tried the same steps on RHEL 6.4 and they worked fine. My guess is there might be a bug with EL5. I don't have an EL5 box handy, hence I couldn't test it.
http://www.gluster.org/community/documentation/index.php/QuickStart
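
For reference, mounting locally with the packages you already have should just be (using the volume name from this thread):

# mount -t glusterfs gcvs4056:/gv2 /mnt
# df -h /mnt    # confirm the mountpoint now shows the gluster volume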
-Lala
host1:
>
>
>
>gluster packages installed:
>
>
>[root at gcvs0139 ~]# rpm -qa | grep gluster | cat -n
> 1 glusterfs-libs-3.4.2-1.el5
> 2 glusterfs-server-3.4.2-1.el5
> 3 glusterfs-3.4.2-1.el5
> 4 glusterfs-cli-3.4.2-1.el5
> 5 glusterfs-geo-replication-3.4.2-1.el5
> 6 glusterfs-fuse-3.4.2-1.el5
>
>
>
>selinux disabled:
>
>
>[root at gcvs0139 ~]# getenforce
>Disabled
>[root at gcvs0139 ~]#
>
>host 2: SAME
>
>
>
>[root at gcvs4056 glusterfs]# rpm -qa | grep gluster | cat -n
> 1 glusterfs-libs-3.4.2-1.el5
> 2 glusterfs-3.4.2-1.el5
> 3 glusterfs-cli-3.4.2-1.el5
> 4 glusterfs-geo-replication-3.4.2-1.el5
> 5 glusterfs-fuse-3.4.2-1.el5
> 6 glusterfs-server-3.4.2-1.el5
>
>[root at gcvs4056 glusterfs]# getenforce
>Disabled
>[root at gcvs4056 glusterfs]#
>
>On Thursday, January 16, 2014 12:37 AM, Lalatendu Mohanty <lmohanty at redhat.com> wrote:
>
>On 01/16/2014 03:58 AM, Jeffrey Brewster wrote:
>
>
>>
>>I'm not sure why the mount is failing. I followed the quick start guide...
>>
>>Data:
>>--------------
>>
>>1. Info check looks good
>>
>>gluster volume info

Volume Name: gv2
Type: Replicate
Volume ID: ca9f2409-3004-4287-af6f-1b455048710e
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gcvs0139:/data/gv0/brick1/app
Brick2: gcvs4056:/data/gv0/brick2/app1

2. Status looks good

gluster volume status

Status of volume: gv2
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick gcvs0139:/data/gv0/brick1/app             49152   Y       7648
Brick gcvs4056:/data/gv0/brick2/app1            49152   Y       12005
NFS Server on localhost                         2049    Y       12017
Self-heal Daemon on localhost                   N/A     Y       12021
NFS Server on gcvs0139                          2049    Y       7660
Self-heal Daemon on gcvs0139                    N/A     Y       7664

There are no active volume tasks

3. peer check looks good

[root at gcvs4056 /]# gluster peer probe gcvs0139
peer probe: success: host gcvs0139 port 24007 already in peer list
[root at gcvs4056 /]#

4. mount fails

[root at gcvs4056 /]# mount -t glusterfs gcvs4056:/gv2 /mnt
Mount failed. Please check the log file for more details.
[root at gcvs4056 /]#

5. mount log

From the mnt.log:
-------------
[2014-01-15 22:19:57.751543] I [afr-common.c:3698:afr_notify] 0-gv2-replicate-0: Subvolume 'gv2-client-1' came back up; going online.
[2014-01-15 22:19:57.751614] I [rpc-clnt.c:1676:rpc_clnt_reconfig] 0-gv2-client-0: changing port to 49152 (from 0)
[2014-01-15 22:19:57.751675] I [client-handshake.c:450:client_set_lk_version_cbk] 0-gv2-client-1: Server lk version = 1
[2014-01-15 22:19:57.751712] W [socket.c:514:__socket_rwv] 0-gv2-client-0: readv failed (No data available)
[2014-01-15 22:19:57.759041] W [common-utils.c:2247:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserved ports info (No such file or directory)
[2014-01-15 22:19:57.759080] W [common-utils.c:2280:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may consume reserved port
[2014-01-15 22:19:57.762259] I [client-handshake.c:1659:select_server_supported_programs] 0-gv2-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2014-01-15 22:19:57.762974] I [client-handshake.c:1456:client_setvolume_cbk] 0-gv2-client-0: Connected to 10.131.83.139:49152, attached to remote volume '/data/gv0/brick1/app'.
[2014-01-15 22:19:57.763008] I [client-handshake.c:1468:client_setvolume_cbk] 0-gv2-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2014-01-15 22:19:57.775406] I [fuse-bridge.c:4769:fuse_graph_setup] 0-fuse: switched to graph 0
[2014-01-15 22:19:57.775695] I [client-handshake.c:450:client_set_lk_version_cbk] 0-gv2-client-0: Server lk version = 1
[2014-01-15 22:19:57.779538] I [fuse-bridge.c:4628:fuse_thread_proc] 0-fuse: unmounting /mnt
[2014-01-15 22:19:57.780102] W [glusterfsd.c:1002:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x31f6ad40cd] (-->/lib64/libpthread.so.0 [0x31f7e0673d] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0x138) [0x405328]))) 0-: received signum (15), shutting down
[2014-01-15 22:19:57.780206] I [fuse-bridge.c:5260:fini] 0-fuse: Unmounting '/mnt'.
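
(The log above shows the client connecting and then immediately unmounting, with no E lines. One option is to remount with more verbose client logging; mount.glusterfs accepts a log-level option, so something along these lines:)

mount -t glusterfs -o log-level=DEBUG gcvs4056:/gv2 /mnt
tail -n 50 /var/log/glusterfs/mnt.log    # look for the first failure among the DEBUG lines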
>>
>>
>>
>>On Tuesday, January 14, 2014 5:12 PM, Jeffrey Brewster <jab2805 at yahoo.com> wrote:
>>
>>
>>
>>Hi Ben,
>>
>>1. Port 24007 is open and all iptables rules have been flushed:
>>
>>[root at gcvs4056 run]# telnet gcvs0139 24007
>>Trying 10.131.83.139...
>>Connected to gcvs0139.
>>Escape character is '^]'.
>>
>>2. gluster peer status looks good from both boxes:
>>
>>box1
>>[root at gcvs4056 run]# gluster peer status
>>Number of Peers: 1
>>
>>Hostname: gcvs0139
>>Uuid: d40ba14d-cbb4-40e7-86a2-62afaa99af4d
>>State: Peer in Cluster (Connected)
>>
>>box 2:
>>
>># gluster peer status
>>Number of Peers: 1
>>
>>Hostname: gcvs4056
>>Port: 24007
>>Uuid: b1aae40a-78be-4303-bf48-49fb41d6bb30
>>State: Peer in Cluster (Connected)
>>
>>3. selinux is disabled on both boxes.
>>
>>grep dis /etc/sysconfig/selinux
>># disabled - No SELinux policy is loaded.
>>SELINUX=disabled
>>
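>>
>>(Port 24007 only covers glusterd; the brick ports, 49152 and up in gluster 3.4, presumably deserve the same telnet test:)
>>
>>telnet gcvs0139 49152
>>telnet gcvs4056 49152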
>I can see that SELinux is disabled in the config file, but that does not take effect unless you reboot the server. Check the current status of SELinux, i.e. run "getenforce".
>
>Also, what gluster packages are installed on the client side?
>
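>(For a runtime check without a reboot, something like the below; note setenforce only switches between enforcing and permissive, so fully disabling SELinux still needs the config change plus a reboot:)
>
>getenforce
>setenforce 0    # permissive; a no-op if SELinux was already disabled at boot
>getenforce
>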
>>Thanks for your help!
>>
>>On Tuesday, January 14, 2014 4:54 PM, Ben Turner <bturner at redhat.com> wrote:
>>
>>----- Original Message -----
>>> From: "Jeffrey Brewster" <jab2805 at yahoo.com>
>>> To: "Ben Turner" <bturner at redhat.com>
>>> Cc: gluster-users at gluster.org
>>> Sent: Tuesday, January 14, 2014 4:35:30 PM
>>> Subject: Re: [Gluster-users] Unable to mount gfs gv0 volume Enterprise Linux Enterprise Linux Server release 5.6 (Carthage)
>>>
>>> Hi Ben,
>>>
>>> I don't have any "E" (error, I assume) lines in the mnt.log file. I checked all the log files in the /var/log/glusterfs/ dir. I restarted glusterd to see if I could see any errors.
>>>
>>
>>Make sure SELinux is disabled and your firewall is open to allow gluster traffic. Have a look at:
>>
>>http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting
>>
>>for what ports you need open. As a test I would just try disabling iptables and adding the rules back in after you confirm it is working.
>>
>>-b
>>
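>>(On el5 that test would look something like this; the port list is an assumption from gluster 3.4 defaults, 24007 for glusterd and 49152 upwards for bricks:)
>>
>>service iptables stop    # test only; restore the rules once the mount works
>>iptables -L -n           # confirm the chains are now empty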
>>
>>>
>>> Data:
>>>
>>> Warnings from mount log:
>>> -------------
>>>
>>> # grep W mnt.log | cat -n
>>>
>>> 1 [2014-01-14 19:32:22.920069] W [common-utils.c:2247:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserv
>>> 2 [2014-01-14 19:32:22.920108] W [common-utils.c:2280:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may c
>>> 3 [2014-01-14 19:32:22.935611] W [common-utils.c:2247:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserv
>>> 4 [2014-01-14 19:32:22.935646] W [common-utils.c:2280:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may c
>>> 5 [2014-01-14 19:32:22.938783] W [common-utils.c:2247:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserv
>>> 6 [2014-01-14 19:32:22.938826] W [common-utils.c:2280:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may c
>>> 7 [2014-01-14 19:32:22.941076] W [socket.c:514:__socket_rwv] 0-gv0-client-1: readv failed (No data available)
>>> 8 [2014-01-14 19:32:22.945278] W [common-utils.c:2247:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserv
>>> 9 [2014-01-14 19:32:22.945312] W [common-utils.c:2280:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may c
>>> 10 [2014-01-14 19:32:22.946921] W [socket.c:514:__socket_rwv] 0-gv0-client-0: readv failed (No data available)
>>> 11 [2014-01-14 19:32:22.953383] W [common-utils.c:2247:gf_get_reserved_ports] 0-glusterfs: could not open the file /proc/sys/net/ipv4/ip_local_reserved_ports for getting reserv
>>> 12 [2014-01-14 19:32:22.953423] W [common-utils.c:2280:gf_process_reserved_ports] 0-glusterfs: Not able to get reserved ports, hence there is a possibility that glusterfs may c
>>> 13 [2014-01-14 19:32:22.976633] W [glusterfsd.c:1002:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x31f6ad40cd] (-->/lib64/libpthread.so.0 [0x
>>>
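>>> (The ip_local_reserved_ports warnings are probably noise rather than the failure: that proc file only exists on newer kernels, so it is expected to be missing on an EL5 kernel. Easy to confirm:)
>>>
>>> ls /proc/sys/net/ipv4/ip_local_reserved_ports    # assumption: absent on EL5, which is all these warnings mean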
>>>
>>> After restarting glusterd:
>>> -----------------------------
>>>
>>> # grep E * | grep 21:25 | cat -n
>>>
>>> 1 etc-glusterfs-glusterd.vol.log:[2014-01-14 21:25:47.637082] E [rpc-transport.c:253:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.4.2/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
>>> 2 etc-glusterfs-glusterd.vol.log:[2014-01-14 21:25:49.940650] E [glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
>>> 3 etc-glusterfs-glusterd.vol.log:[2014-01-14 21:25:49.940698] E [glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
>>> 4 etc-glusterfs-glusterd.vol.log:[2014-01-14 21:25:52.075563] E [glusterd-utils.c:3801:glusterd_nodesvc_unlink_socket_file] 0-management: Failed to remove /var/run/3096dde11d292c28c8c2f97101c272e8.socket error: Resource temporarily unavailable
>>> 5 etc-glusterfs-glusterd.vol.log:[2014-01-14 21:25:53.084722] E [glusterd-utils.c:3801:glusterd_nodesvc_unlink_socket_file] 0-management: Failed to remove /var/run/15f2dcd004edbff6ab31364853d6b6b0.socket error: No such file or directory
>>> 6 glustershd.log:[2014-01-14 21:25:42.392401] W [socket.c:1962:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (No data available), peer (127.0.0.1:24007)
>>> 7 glustershd.log:[2014-01-14 21:25:53.476026] E [afr-self-heald.c:1067:afr_find_child_position] 0-gv0-replicate-0: getxattr failed on gv0-client-0 - (Transport endpoint is not connected)
>>> 8 nfs.log:[2014-01-14 21:25:42.391560] W [socket.c:1962:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (No data available), peer (127.0.0.1:24007)
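>>>
>>> (The rdma.so error on line 1 is likely harmless here, since the volume uses the tcp transport; quick to confirm before chasing it:)
>>>
>>> gluster volume info gv0 | grep Transport    # expect tcp, in which case the missing RDMA transport should not block mounting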
>>>
>>> Procs after restart:
>>>
>>> ps -ef | grep gluster
>>> root 6345 1 0 18:35 ? 00:00:00 /usr/sbin/glusterfsd -s gcvs4056 --volfile-id gv0.gcvs4056.data-gv0-brick1-app -p /var/lib/glusterd/vols/gv0/run/gcvs4056-data-gv0-brick1-app.pid -S /var/run/f2339d9fa145fd28662d8b970fbd4aab.socket --brick-name /data/gv0/brick1/app -l /var/log/glusterfs/bricks/data-gv0-brick1-app.log --xlator-option *-posix.glusterd-uuid=b1aae40a-78be-4303-bf48-49fb41d6bb30 --brick-port 49153 --xlator-option gv0-server.listen-port=49153
>>> root 7240 1 0 21:25 ? 00:00:00 /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
>>> root 7266 1 0 21:25 ? 00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/3096dde11d292c28c8c2f97101c272e8.socket
>>> root 7273 1 0 21:25 ? 00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/15f2dcd004edbff6ab31364853d6b6b0.socket --xlator-option *replicate*.node-uuid=b1aae40a-78be-4303-bf48-49fb41d6bb30
>>> root 7331 5375 0 21:34 pts/1 00:00:00 grep gluster
>>>
>>> On Tuesday, January 14, 2014 4:08 PM, Ben Turner <bturner at redhat.com> wrote:
>>>
>>> ----- Original Message -----
>>> > From: "Jeffrey Brewster" <jab2805 at yahoo.com>
>>> > To: "Ben Turner" <bturner at redhat.com>
>>> > Cc: gluster-users at gluster.org
>>> > Sent: Tuesday, January 14, 2014 3:57:24 PM
>>> > Subject: Re: [Gluster-users] Unable to mount gfs gv0 volume Enterprise Linux Enterprise Linux Server release 5.6 (Carthage)
>>> >
>>> > Thanks Ben,
>>> >
>>> > I tried that; it still failed.
>>>
>>> As Vijay suggested, have a look at /var/log/glusterfs; there should be a log there with the mountpoint name that should give us a clue as to what is going on. To note, if there is a problem with FUSE not being loaded you will see something like:
>>>
>>> [2013-01-12 01:58:22.213417] I [glusterfsd.c:1759:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0.5rhs
>>> [2013-01-12 01:58:22.213831] E [mount.c:596:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such file or directory)
>>> [2013-01-12 01:58:22.213856] E [xlator.c:385:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again
>>>
>>> If you can't tell the problem from the log, shoot out the relevant line and I'll have a look.
>>>
>>> -b
>>>
>>> >
>>> > On Tuesday, January 14, 2014 3:22 PM, Ben Turner <bturner at redhat.com> wrote:
>>> >
>>> > ----- Original Message -----
>>> > > From: "Jeffrey Brewster" <jab2805 at yahoo.com>
>>> > > To: gluster-users at gluster.org
>>> > > Sent: Tuesday, January 14, 2014 1:47:55 PM
>>> > > Subject: [Gluster-users] Unable to mount gfs gv0 volume Enterprise Linux Enterprise Linux Server release 5.6 (Carthage)
>>> > >
>>> > > Hi all,
>>> > >
>>> > > I have been following the quick start guide as part of a POC. I created a
>>> > > 10GB brick to be mounted. I'm unable to mount the volume. I don't see
>>> > > anything in the logs. Has anyone had the same issues? I was thinking I
>>> > > need to install gluster-client, but I don't see it in the latest release rpms.
>>> > >
>>> > > Data:
>>> > > ===========
>>> > >
>>> > > OS Version:
>>> > > ------------
>>> > >
>>> > > Description: Enterprise Linux Enterprise Linux Server release 5.6 (Carthage)
>>> > >
>>> > > Installed packages on both servers
>>> > > ------------
>>> > >
>>> > > # rpm -qa | grep gluster | cat -n
>>> > > 1 glusterfs-libs-3.4.2-1.el5
>>> > > 2 glusterfs-3.4.2-1.el5
>>> > > 3 glusterfs-cli-3.4.2-1.el5
>>> > > 4 glusterfs-geo-replication-3.4.2-1.el5
>>> > > 5 glusterfs-fuse-3.4.2-1.el5
>>> > > 6 glusterfs-server-3.4.2-1.el5
>>> > >
>>> > > gluster peer probe successful:
>>> > > -----------
>>> > > peer probe: success: host gcvs0139 port 24007 already in peer list
>>> > >
>>> > > Gluster info:
>>> > > ---------
>>> > > gluster volume info | cat -n
>>> > > 1
>>> > > 2 Volume Name: gv0
>>> > > 3 Type: Replicate
>>> > > 4 Volume ID: 30a27041-ba1b-456f-b0bc-d8cdd2376c2f
>>> > > 5 Status: Started
>>> > > 6 Number of Bricks: 1 x 2 = 2
>>> > > 7 Transport-type: tcp
>>> > > 8 Bricks:
>>> > > 9 Brick1: gcvs0139:/data/gv0/brick1/app
>>> > > 10 Brick2: gcvs4056:/data/gv0/brick1/app
>>> > >
>>> > > Mount Failure:
>>> > > ----------
>>> > >
>>> > > [root at gcvs4056 jbrewster]# mount -t glusterfs gcvs4056:/gv0 /mnt
>>> > > Mount failed. Please check the log file for more details.
>>> > >
>>> >
>>> > I bet you need to modprobe the fuse module; in el5 it's not loaded by default.
>>> >
>>> > -b
>>> >
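>>> > (A minimal way to test that theory, assuming the fuse module really is the missing piece:)
>>> >
>>> > modprobe fuse
>>> > lsmod | grep fuse    # the module should now be listed
>>> > ls -l /dev/fuse      # the device node appears once fuse is loaded
>>> > mount -t glusterfs gcvs4056:/gv0 /mnt
>>> > echo "modprobe fuse" >> /etc/rc.modules    # assumption: el5 runs this executable file at boot, persisting the fix
>>> >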
>>> > >
>>> > > _______________________________________________
>>> > > Gluster-users mailing list
>>> > > Gluster-users at gluster.org
>>> > > http://supercolony.gluster.org/mailman/listinfo/gluster-users
>>
>>_______________________________________________
>>Gluster-users mailing list
>>Gluster-users at gluster.org
>>http://supercolony.gluster.org/mailman/listinfo/gluster-users