[Gluster-users] Gluster client mount fails in mid flight with signum 15

Sunil Kumar Heggodu Gopala Acharya sheggodu at redhat.com
Tue May 30 12:45:24 UTC 2017


Hi Gabriel,

I am not able to reproduce the issue you mentioned on my setup.

Please share the log files (both brick and client log files) from your
setup. It would also be great if you could share the steps you followed
to hit the issue.
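
For reference, the client logs normally live under /var/log/glusterfs/ on
the clients (named after the mount point, e.g. mnt-volume.log) and the
brick logs under /var/log/glusterfs/bricks/ on the servers. A minimal
collection sketch, assuming the default log locations:

    # on a client: bundle all client-side glusterfs logs
    tar czf client-logs.tar.gz /var/log/glusterfs/*.log
    # on each server: bundle the brick logs
    tar czf brick-logs.tar.gz /var/log/glusterfs/bricks/*.log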


Regards,

Sunil kumar Acharya

Senior Software Engineer

Red Hat

T: +91-8067935170


On Tue, May 30, 2017 at 3:30 PM, Gabriel Lindeborg <
gabriel.lindeborg at svenskaspel.se> wrote:

> Hello
>
> A manual mount failed the same way
>
> Cheers
> Gabbe
>
> On 30 May 2017, at 10:24, Sunil Kumar Heggodu Gopala Acharya <
> sheggodu at redhat.com> wrote:
>
> Hi Gabriel,
>
> Which Gluster version are you running? Are you able to FUSE-mount the
> volume?
>
> Please share the failure logs.
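>
> A quick sketch of both checks (SERVER and VOLNAME are placeholders for
> your own values):
>
>     # report the installed Gluster client version
>     glusterfs --version
>     # try a manual FUSE mount of the volume
>     mount -t glusterfs SERVER:/VOLNAME /mnt/test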
>
> Regards,
> Sunil kumar Acharya
>
> Senior Software Engineer
> Red Hat
>
> T: +91-8067935170
>
>
> On Tue, May 30, 2017 at 1:04 PM, Gabriel Lindeborg <gabriel.lindeborg@
> svenskaspel.se> wrote:
>
>> Hello All
>>
>> We have a problem where Gluster client mounts fail in mid-flight with this
>> in the log:
>> glusterfsd.c:1332:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5)
>> [0x7f640c8b3dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
>> [0x7f640df4bfd5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
>> [0x7f640df4bdfb] ) 0-: received signum (15), shutting down.
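>>
>> For reference, signum 15 is SIGTERM, i.e. something on the client asked
>> the glusterfs process to shut down; the backtrace above is only the
>> normal shutdown path, not the culprit. A sketch for finding the sender,
>> assuming auditd is available (the key name track-sigterm is arbitrary):
>>
>>     # log every kill() syscall that delivers signal 15
>>     auditctl -a always,exit -F arch=b64 -S kill -F a1=15 -k track-sigterm
>>     # after the next failure, see which process sent it
>>     ausearch -k track-sigterm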
>>
>> We've tried running with debug logging but have not found anything
>> suspicious happening at the time of the failures.
>> We've searched the web but cannot find anyone else having the same
>> problem in mid-flight.
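>>
>> (By "running debug" we mean raising the log level, along these lines;
>> the exact options may vary by version:)
>>
>>     # one-off: mount a client with debug logging
>>     mount -t glusterfs -o log-level=DEBUG GLUSTERSERVER1:/GLUSTERVOLUME /mnt/test
>>     # or volume-wide, for all clients
>>     gluster volume set GLUSTERVOLUME diagnostics.client-log-level DEBUG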
>>
>> The clients have four mounts of volumes from the same server; all mounts
>> fail simultaneously.
>> Peer status looks OK.
>> Volume status looks OK.
>> Volume info looks like this:
>> Volume Name: GLUSTERVOLUME
>> Type: Replicate
>> Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
>> Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
>> Options Reconfigured:
>> transport.address-family: inet
>> cluster.self-heal-daemon: enable
>> nfs.disable: on
>> server.allow-insecure: on
>> client.bind-insecure: on
>> network.ping-timeout: 5
>> features.bitrot: on
>> features.scrub: Active
>> features.scrub-freq: weekly
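>>
>> (The above is the output of gluster volume info; to also see the options
>> still at their defaults, something like this should work:)
>>
>>     # list every option on the volume, including unset defaults
>>     gluster volume get GLUSTERVOLUME all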
>>
>> Any ideas?
>>
>> Cheers
>> Gabbe
>>
>>
>>
>> AB SVENSKA SPEL
>> 621 80 Visby
>> Norra Hansegatan 17, Visby
>> Switchboard: +4610-120 00 00
>> https://svenskaspel.se
>>

