[Gluster-users] Failure after update

ousmane sanogo sanoousmane at gmail.com
Tue Feb 16 10:49:46 UTC 2016


I updated from 3.7.6 to 3.7.8.
I am on CentOS 7.2.

[root@compute1 ~]# gluster volume status
Status of volume: vol_cinder
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.16.10.2:/glusterfs-cinder         49156     0          Y       2515
Brick 172.16.10.3:/glusterfs-cinder         49156     0          Y       2235
NFS Server on localhost                     2049      0          Y       2492
Self-heal Daemon on localhost               N/A       N/A        Y       2497
NFS Server on compute2                      2049      0          Y       2224
Self-heal Daemon on compute2                N/A       N/A        Y       2264

Task Status of Volume vol_cinder
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vol_glances
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.16.10.2:/glusterfs-glances        49153     0          Y       2521
Brick 172.16.10.3:/glusterfs-glances        49153     0          Y       2236
NFS Server on localhost                     2049      0          Y       2492
Self-heal Daemon on localhost               N/A       N/A        Y       2497
NFS Server on compute2                      2049      0          Y       2224
Self-heal Daemon on compute2                N/A       N/A        Y       2264

Task Status of Volume vol_glances
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vol_instances
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.16.10.2:/glusterfs-instances      49152     0          Y       2507
Brick 172.16.10.3:/glusterfs-instances      49152     0          Y       2265
NFS Server on localhost                     2049      0          Y       2492
Self-heal Daemon on localhost               N/A       N/A        Y       2497
NFS Server on compute2                      2049      0          Y       2224
Self-heal Daemon on compute2                N/A       N/A        Y       2264

Task Status of Volume vol_instances
------------------------------------------------------------------------------
There are no active volume tasks
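
Every process above reports Online = Y, which answers the third question below. For a larger deployment, a small filter over saved `gluster volume status` output can flag anything offline. A sketch; the sample input here is hypothetical, modeled on the output above:

```shell
# Save the status once:  gluster volume status > status.txt
# A small hypothetical sample stands in for the real output here.
cat > status.txt <<'EOF'
Brick 172.16.10.2:/glusterfs-cinder         49156     0          Y       2515
Brick 172.16.10.3:/glusterfs-cinder         49156     0          N       N/A
NFS Server on localhost                     2049      0          Y       2492
EOF

# In each process row the Online flag is the second-to-last field.
awk 'NF >= 2 && $(NF-1) == "N" { print "OFFLINE:", $0 }' status.txt
```

Anything this prints points at a brick or daemon whose log under /var/log/glusterfs/ deserves a look.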

2016-02-16 9:31 GMT+00:00 Niels de Vos <ndevos at redhat.com>:

> On Tue, Feb 16, 2016 at 08:35:05AM +0000, ousmane sanogo wrote:
> > Hello, I updated my gluster nodes yesterday.
> > I am using OpenStack Cinder with Gluster,
> > and I got these warnings after the update:
> >
> > warning: /var/lib/glusterd/vols/vol_cinder/vol_cinder.172.16.10.2.glusterfs-cinder.vol saved as /var/lib/glusterd/vols/vol_cinder/vol_cinder.172.16.10.2.glusterfs-cinder.vol.rpmsave
> > warning: /var/lib/glusterd/vols/vol_cinder/vol_cinder.tcp-fuse.vol saved as /var/lib/glusterd/vols/vol_cinder/vol_cinder.tcp-fuse.vol.rpmsave
> > warning: /var/lib/glusterd/vols/vol_cinder/vol_cinder.172.16.10.3.glusterfs-cinder.vol saved as /var/lib/glusterd/vols/vol_cinder/vol_cinder.172.16.10.3.glusterfs-cinder.vol.rpmsave
> > warning: /var/lib/glusterd/vols/vol_cinder/trusted-vol_cinder.tcp-fuse.vol saved as /var/lib/glusterd/vols/vol_cinder/trusted-vol_cinder.tcp-fuse.vol.rpmsave
> > warning: /var/lib/glusterd/vols/vol_instances/vol_instances.172.16.10.2.glusterfs-instances.vol saved as /var/lib/glusterd/vols/vol_instances/vol_instances.172.16.10.2.glusterfs-instances.vol.rpmsave
> > Warning: glusterd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
> >
> > I ran "systemctl daemon-reload" on the node.
> >
> > On node 1:
> > [root@compute1 ~]# tail /var/log/glusterfs/var-lib-nova-mnt-7e2fea33428149438b876dd122157f27.log -f
> > [2016-02-15 19:56:44.459473] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vol_cinder-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
> > [2016-02-15 19:56:44.459644] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vol_cinder-client-0: Connected to vol_cinder-client-0, attached to remote volume '/glusterfs-cinder'.
> > [2016-02-15 19:56:44.459658] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vol_cinder-client-0: Server and Client lk-version numbers are not same, reopening the fds
> > [2016-02-15 19:56:44.459666] I [MSGID: 114042] [client-handshake.c:1056:client_post_handshake] 0-vol_cinder-client-0: 1 fds open - Delaying child_up until they are re-opened
> > [2016-02-15 19:56:44.459882] I [MSGID: 114041] [client-handshake.c:678:client_child_up_reopen_done] 0-vol_cinder-client-0: last fd open'd/lock-self-heal'd - notifying CHILD-UP
> > [2016-02-15 19:56:44.459944] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vol_cinder-client-0: Server lk version = 1
> > [2016-02-15 19:57:32.625556] I [MSGID: 108031] [afr-common.c:1782:afr_local_discovery_cbk] 0-vol_cinder-replicate-0: selecting local read_child vol_cinder-client-0
> > [2016-02-15 20:08:23.876756] I [fuse-bridge.c:4984:fuse_thread_proc] 0-fuse: unmounting /var/lib/nova/mnt/7e2fea33428149438b876dd122157f27
> > [2016-02-15 20:08:23.918397] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7fee5b957dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7fee5cfc2855] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7fee5cfc26d9] ) 0-: received signum (15), shutting down
> > [2016-02-15 20:08:23.918424] I [fuse-bridge.c:5683:fini] 0-fuse: Unmounting '/var/lib/nova/mnt/7e2fea33428149438b876dd122157f27'.
> >
> >
> > [root@compute1 ~]# tail /var/log/glusterfs/var-lib-nova-instances.log -f
> > [2016-02-16 08:32:32.292641] W [fuse-bridge.c:2292:fuse_writev_cbk] 0-glusterfs-fuse: 62606120: WRITE => -1 (Transport endpoint is not connected)
> > [2016-02-16 08:32:32.292700] W [fuse-bridge.c:2292:fuse_writev_cbk] 0-glusterfs-fuse: 62606122: WRITE => -1 (Transport endpoint is not connected)
> > [2016-02-16 08:32:32.292756] W [fuse-bridge.c:2292:fuse_writev_cbk] 0-glusterfs-fuse: 62606124: WRITE => -1 (Transport endpoint is not connected)
> >
> > On node 2
> >
> > [root@compute2 ~]# tail /var/log/glusterfs/var-lib-nova-mnt-7e2fea33428149438b876dd122157f27.log -f
> > [2016-02-15 19:56:47.042442] W [fuse-bridge.c:2292:fuse_writev_cbk] 0-glusterfs-fuse: 15109968: WRITE => -1 (Transport endpoint is not connected)
> > [2016-02-15 19:56:47.047263] W [fuse-bridge.c:2292:fuse_writev_cbk] 0-glusterfs-fuse: 15109970: WRITE => -1 (Transport endpoint is not connected)
> > [2016-02-15 19:56:47.047339] W [fuse-bridge.c:2292:fuse_writev_cbk] 0-glusterfs-fuse: 15109972: WRITE => -1 (Transport endpoint is not connected)
> > [2016-02-15 19:57:03.118138] I [MSGID: 108031] [afr-common.c:1782:afr_local_discovery_cbk] 0-vol_cinder-replicate-0: selecting local read_child vol_cinder-client-1
> > [2016-02-15 20:07:19.303007] W [fuse-bridge.c:1282:fuse_err_cbk] 0-glusterfs-fuse: 15109995: FSYNC() ERR => -1 (Transport endpoint is not connected)
> > [2016-02-15 20:07:19.318493] W [fuse-bridge.c:1282:fuse_err_cbk] 0-glusterfs-fuse: 15109996: FSYNC() ERR => -1 (Transport endpoint is not connected)
> > [2016-02-15 20:07:19.318601] W [fuse-bridge.c:1282:fuse_err_cbk] 0-glusterfs-fuse: 15109997: FLUSH() ERR => -1 (Transport endpoint is not connected)
> > [2016-02-15 20:07:20.264111] I [fuse-bridge.c:4984:fuse_thread_proc] 0-fuse: unmounting /var/lib/nova/mnt/7e2fea33428149438b876dd122157f27
> > [2016-02-15 20:07:20.264361] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f853d29fdc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f853e90a855] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f853e90a6d9] ) 0-: received signum (15), shutting down
> > [2016-02-15 20:07:20.264381] I [fuse-bridge.c:5683:fini] 0-fuse: Unmounting '/var/lib/nova/mnt/7e2fea33428149438b876dd122157f27'.
> >
> >
> > I restarted glusterd and glusterfsd, but I can't mount
> > /var/lib/nova/mnt/7e2fea33428149438b876dd122157f27
>
> Some things that are missing:
> - What version of glusterfs packages were upgraded to what version?
> - Which OS/distribution?
> - Does 'gluster volume status' show any missing processes?
>
> If not all brick processes are running, you probably should check the
> logs of those processes (/var/log/glusterfs/bricks/*.log).
>
> The messages that you posted suggest that some (or all?) of the bricks
> are not reachable. This causes the mounting to fail.
>
> HTH,
> Niels
>
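
On the .rpmsave warnings quoted above: RPM sets the pre-upgrade volfiles aside because glusterd regenerates them. To check whether any locally tuned options were dropped in the regeneration, the saved copies can be diffed against the new files. A minimal sketch (paths per the warnings; run on each node):

```shell
# Compare each preserved volfile with its regenerated counterpart.
volsdir=/var/lib/glusterd/vols        # glusterd's working directory on CentOS
for saved in "$volsdir"/*/*.rpmsave; do
    [ -e "$saved" ] || continue       # glob matched nothing: no .rpmsave files
    current="${saved%.rpmsave}"
    echo "=== ${current##*/} ==="
    diff -u "$saved" "$current" || true   # non-zero exit only means "differs"
done
```

Some differences are expected after an upgrade; only hand-tuned options need carrying forward, and those are better reapplied with `gluster volume set` than by editing volfiles directly.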

