[Gluster-users] Gluster client mount fails in mid flight with signum 15
Gabriel Lindeborg
gabriel.lindeborg at svenskaspel.se
Tue May 30 08:55:57 UTC 2017
Hello,
3.10.2
Initial mounting works fine, the fail comes a while after mounting.
This is the mnt.log for one of the mounts just before the fail:
/DAEMON/DEBUG [2017-05-30T09:17:45.371949+02:00] [] []
/DAEMON/INFO [2017-05-30T09:17:45.373441+02:00] [] []
/DAEMON/DEBUG [2017-05-30T09:17:45.373620+02:00] [] []
/DAEMON/INFO [2017-05-30T09:17:45.374734+02:00] [] []
/DAEMON/INFO [2017-05-30T09:17:45.374892+02:00] [] []
/DAEMON/DEBUG [2017-05-30T09:17:45.375301+02:00] [] []
/DAEMON/INFO [2017-05-30T09:17:45.407628+02:00] [] []
[2017-05-30 07:17:48.520770] I [MSGID: 108031] [afr-common.c:2340:afr_local_discovery_cbk] 0-alfresco-replicate-0: selecting local read_child alfresco-client-2
/DAEMON/INFO [2017-05-30T09:17:54.642644+02:00] [] []
/DAEMON/INFO [2017-05-30T09:17:54.651476+02:00] [] []
/DAEMON/INFO [2017-05-30T09:17:54.656808+02:00] [] []
[2017-05-30 07:17:45.371169] D [MSGID: 0] [options.c:1237:xlator_option_reconf_bool] 0-alfresco-dht: option lock-migration using set value off
[2017-05-30 07:17:45.371218] D [MSGID: 0] [dht-shared.c:363:dht_init_regex] 0-alfresco-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
[2017-05-30 07:17:45.371225] D [MSGID: 0] [options.c:1100:xlator_reconfigure_rec] 0-alfresco-dht: reconfigured
[2017-05-30 08:00:47.932460] W [glusterfsd.c:1332:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f1159819dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f115aeb1fd5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f115aeb1dfb] ) 0-: received signum (15), shutting down
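For context: signum 15 is SIGTERM, so some process outside the client deliberately asked it to stop, and glusterfs_sigwaiter shut the mount down in response. A minimal sketch of that behaviour, with `sleep` standing in for the glusterfs client process:

```shell
# Minimal sketch: signal 15 is SIGTERM, and a process killed by it
# exits with status 128 + 15 = 143. "sleep" stands in here for the
# glusterfs client process named in the log above.
sleep 60 &
pid=$!
kill -15 "$pid"          # the same signal glusterfs_sigwaiter reports
wait "$pid"
echo "exit status: $?"   # prints "exit status: 143"
```

If it is unclear what sends the signal, the Linux audit subsystem can record the sender of kill() syscalls, e.g. `auditctl -a always,exit -F arch=b64 -S kill -k gluster-sigterm` and then `ausearch -k gluster-sigterm` after the next failure (assumes auditd is installed and you have root).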
Cheers
Gabbe
On 30 May 2017 at 10:24, Sunil Kumar Heggodu Gopala Acharya <sheggodu at redhat.com> wrote:
Hi Gabriel,
Which gluster version are you running? Are you able to fuse mount the volume?
Please share the failure logs.
Regards,
Sunil kumar Acharya
Senior Software Engineer
Red Hat
T: +91-8067935170
On Tue, May 30, 2017 at 1:04 PM, Gabriel Lindeborg <gabriel.lindeborg at svenskaspel.se> wrote:
Hello All
We have a problem with gluster client mounts failing in mid run, with this in the log:
glusterfsd.c:1332:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f640c8b3dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f640df4bfd5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f640df4bdfb] ) 0-: received signum (15), shutting down.
We’ve tried running at debug log level but have not found anything suspicious happening at the time of the failures
We’ve searched the web but cannot find anyone else reporting the same mid-flight failure
The clients have four mounts of volumes from the same server, all mounts fail simultaneously
Peer status looks ok
Volume status looks ok
Volume info looks like this:
Volume Name: GLUSTERVOLUME
Type: Replicate
Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
Options Reconfigured:
transport.address-family: inet
cluster.self-heal-daemon: enable
nfs.disable: on
server.allow-insecure: on
client.bind-insecure: on
network.ping-timeout: 5
features.bitrot: on
features.scrub: Active
features.scrub-freq: weekly
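A hedged sketch for narrowing this down from a client (the volume, server, and mount-point names below just follow the redacted placeholders above): `gluster volume get` prints every effective option including defaults, and mounting one volume by hand with its own debug-level log keeps the shutdown readable in isolation.

```shell
# Sketch, assuming the placeholder names from the volume info above.
# Print every effective option for the volume, including defaults:
gluster volume get GLUSTERVOLUME all

# Fuse-mount one volume by hand with a dedicated debug-level log,
# so the next shutdown can be read without the other three mounts:
mount -t glusterfs \
  -o log-level=DEBUG,log-file=/var/log/glusterfs/mnt-manual.log \
  GLUSTERSERVER1:/GLUSTERVOLUME /mnt/GLUSTERVOLUME
```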
Any ideas?
Cheers
Gabbe
AB SVENSKA SPEL
621 80 Visby
Norra Hansegatan 17, Visby
Switchboard: +4610-120 00 00
https://svenskaspel.se
Please consider the environment before printing this email
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users