[Gluster-users] [Gluster-devel] 3.7.5 upgrade issues
JuanFra Rodríguez Cardoso
jfrodriguez at keedio.com
Fri Oct 23 15:32:48 UTC 2015
I hit that problem too, but I was not able to fix it; I was forced to
downgrade to 3.7.4 to keep my gluster volumes running.
The upgrade process (3.7.4 -> 3.7.5) does not seem fully reliable.
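
Roughly what that downgrade looks like on each node (a sketch assuming an
RPM-based install; package names and service commands vary by distro):

    service glusterd stop              # stop the management daemon
    pkill glusterfs; pkill glusterfsd  # stop client and brick processes
    # rpm refuses to "upgrade" to an older version without --oldpackage
    rpm -Uvh --oldpackage glusterfs*-3.7.4*.rpm
    service glusterd start
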
Best.
.....................................................................
Juan Francisco Rodríguez Cardoso
jfrodriguez at keedio.com | +34 636 69 26 91
www.keedio.com
.....................................................................
On 16 October 2015 at 15:24, David Robinson <david.robinson at corvidtec.com>
wrote:
> That log was from frick, which is the node that I upgraded. The frack
> one is attached. One thing I did notice were the errors below in the etc
> log file; the /usr/lib64/glusterfs/3.7.5 directory doesn't exist yet on
> frack.
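>
> A quick way to compare what is actually installed on the two nodes (a
> sketch, assuming the stock RPM layout):
>
>     ls /usr/lib64/glusterfs/           # version directories on disk
>     rpm -q glusterfs glusterfs-server  # installed package versions
>     gluster --version                  # version of the installed CLI/daemons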
>
>
> +------------------------------------------------------------------------------+
> [2015-10-16 12:04:06.235993] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 2
> [2015-10-16 12:04:06.236036] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2015-10-16 12:04:06.236099] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 2
> [2015-10-16 12:04:09.242413] E [socket.c:2278:socket_connect_finish]
> 0-management: connection to 10.200.82.1:24007 failed (No route to host)
> [2015-10-16 12:04:09.242504] I [MSGID: 106004]
> [glusterd-handler.c:5056:__glusterd_peer_rpc_notify] 0-management: Peer
> <frackib01.corvidtec.com> (<8ab9a966-d536-4bd1-828a-64b2d72c47ca>), in
> state <Peer in Cluster>, has disconnected from glusterd.
> [2015-10-16 12:04:09.726895] W [socket.c:869:__socket_keepalive] 0-socket:
> failed to set TCP_USER_TIMEOUT -1000 on socket 14, Invalid argument
> [2015-10-16 12:04:09.726918] E [socket.c:2965:socket_connect]
> 0-management: Failed to set keep-alive: Invalid argument
> [2015-10-16 12:04:09.902756] W [MSGID: 101095]
> [xlator.c:143:xlator_volopt_dynload] 0-xlator:
> /usr/lib64/glusterfs/3.7.5/xlator/rpc-transport/socket.so: cannot open
> shared object file: No such file or directory
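>
> That last warning matches the missing /usr/lib64/glusterfs/3.7.5
> directory noted above: glusterd is looking for a socket.so under a
> version directory that is not installed on this node. One way to see
> which versions of the transport are on disk (stock paths assumed):
>
>     find /usr/lib64/glusterfs -name socket.so
>
> Stopping all gluster processes before the RPM upgrade and only
> restarting glusterd afterwards should avoid this mixed-version window.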
>
>
> ------ Original Message ------
> From: "Mohammed Rafi K C" <rkavunga at redhat.com>
> To: "David Robinson" <drobinson at corvidtec.com>; "gluster-users at gluster.org"
> <gluster-users at gluster.org>; "Gluster Devel" <gluster-devel at gluster.org>
> Sent: 10/16/2015 8:43:21 AM
> Subject: Re: [Gluster-devel] 3.7.5 upgrade issues
>
>
> Hi David,
>
> Are the logs you attached from node "frackib01.corvidtec.com"? If not,
> can you attach the logs from that node?
>
> Regards
> Rafi KC
> On 10/16/2015 05:46 PM, David Robinson wrote:
>
> I have a replica pair setup that I was trying to upgrade from 3.7.4 to
> 3.7.5.
> After upgrading the rpm packages (rpm -Uvh *.rpm) and rebooting one of the
> nodes, I am now receiving the following:
>
> [root@frick01 log]# gluster volume status
> Staging failed on frackib01.corvidtec.com. Please check log file for
> details.
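>
> "Please check log file for details" refers to the glusterd log on the
> node named in the error. Something like the following (standard log and
> state paths assumed) shows the staging error and whether the peers still
> agree on versions:
>
>     less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
>     gluster peer status
>     grep operating-version /var/lib/glusterd/glusterd.info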
>
>
>
> The logs are attached and my setup is shown below. Can anyone help?
>
> [root@frick01 log]# gluster volume info
>
> Volume Name: gfs
> Type: Replicate
> Volume ID: abc63b5c-bed7-4e3d-9057-00930a2d85d3
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp,rdma
> Bricks:
> Brick1: frickib01.corvidtec.com:/data/brick01/gfs
> Brick2: frackib01.corvidtec.com:/data/brick01/gfs
> Options Reconfigured:
> storage.owner-gid: 100
> server.allow-insecure: on
> performance.readdir-ahead: on
> server.event-threads: 4
> client.event-threads: 4
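>
> For reference, the entries under "Options Reconfigured" are the options
> changed from their defaults, each set with a command of the form:
>
>     gluster volume set gfs server.event-threads 4
>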
> David
>
>
>