[Gluster-users] self service snapshot access broken with 3.7.11

FNU Raghavendra Manjunath rabhat at redhat.com
Tue Apr 26 21:33:23 UTC 2016


Hi,

Thanks for the snapd log. Can you please attach all the gluster logs as
well, i.e. the contents of /var/log/glusterfs?
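
If it is easier to grab them in one go, something along these lines should
work (assuming the logs are in the default location):

    tar czf glusterfs-logs.tar.gz /var/log/glusterfs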

Regards,
Raghavendra


On Mon, Apr 25, 2016 at 11:27 AM, Alastair Neil <ajneil.tech at gmail.com>
wrote:

> attached compressed log
>
> On 22 April 2016 at 20:15, FNU Raghavendra Manjunath <rabhat at redhat.com>
> wrote:
>
>>
>> Hi Alastair,
>>
>> Can you please provide the snap daemon log? It is located at
>> /var/log/glusterfs/snaps/snapd.log.
>>
>> Provide the snapd log from the node that you mounted the volume from
>> (i.e. the node whose IP address or hostname you used when mounting the
>> volume).
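>>
>> If you are not sure which node that was, the mount source recorded on the
>> client shows it; for a FUSE mount, something like:
>>
>>     grep fuse.glusterfs /proc/mounts
>>
>> The first field there is host:/volname; the snapd.log from that host is
>> the one we need.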
>>
>> Regards,
>> Raghavendra
>>
>>
>>
>> On Fri, Apr 22, 2016 at 5:19 PM, Alastair Neil <ajneil.tech at gmail.com>
>> wrote:
>>
>>> I just upgraded my cluster to 3.7.11 from 3.7.10, and access to the
>>> .snaps directories now fails with:
>>>
>>> bash: cd: .snaps: Transport endpoint is not connected
>>>
>>>
>>> In the volume log file on the client I see:
>>>
>>>> [2016-04-22 21:08:28.005854] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
>>>> 2-homes-snapd-client: changing port to 49493 (from 0)
>>>> [2016-04-22 21:08:28.009558] E [socket.c:2278:socket_connect_finish]
>>>> 2-homes-snapd-client: connection to xx.xx.xx.xx.xx:49493 failed (No route
>>>> to host)
>>>
>>>
>>> I'm quite perplexed. It's not a network or DNS issue as far as I can
>>> tell: the glusterfs client is working fine, and the gluster servers all
>>> resolve OK. It seems to be happening on all the clients; I have tried
>>> systems with 3.7.8, 3.7.10, and 3.7.11 clients and see the same failure
>>> on all of them.
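>>>
>>> One caveat I'm aware of: a firewall REJECT rule returns "No route to
>>> host" on a TCP connect even when routing and DNS are fine, so a direct
>>> probe of the snapd port from a client would distinguish the two. The
>>> port is 49493 per the log above, though it can change when snapd
>>> restarts; e.g. with nc (or telnet):
>>>
>>>     nc -zv <server-hostname> 49493
>>>
>>> with <server-hostname> standing in for the node used in the mount.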
>>>
>>> On the servers the snapshots are being taken as expected, and they are
>>> in the Started state:
>>>
>>>> Snapshot                  : Scheduled-Homes_Hourly-homes_GMT-2016.04.22-16.00.01
>>>> Snap UUID                 : 91ba50b0-d8f2-4135-9ea5-edfdfe2ce61d
>>>> Created                   : 2016-04-22 16:00:01
>>>> Snap Volumes:
>>>> Snap Volume Name          : 5170144102814026a34f8f948738406f
>>>> Origin Volume name        : homes
>>>> Snaps taken for homes     : 16
>>>> Snaps available for homes : 240
>>>> Status                    : Started
>>>
>>>
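>>> (The block above looks like gluster snapshot info output; gluster
>>> snapshot status <snapname> would additionally show per-brick detail for
>>> the snapshot volume, if that helps.)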
>>>
>>> The homes volume is replica 3; all the peers are up, and so are all the
>>> bricks and services:
>>>
>>> glv status homes
>>>> Status of volume: homes
>>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>>> ------------------------------------------------------------------------------
>>>> Brick gluster-2:/export/brick2/home         49171     0          Y       38298
>>>> Brick gluster0:/export/brick2/home          49154     0          Y       23519
>>>> Brick gluster1.vsnet.gmu.edu:/export/brick2
>>>> /home                                       49154     0          Y       23794
>>>> Snapshot Daemon on localhost                49486     0          Y       23699
>>>> NFS Server on localhost                     2049      0          Y       23486
>>>> Self-heal Daemon on localhost               N/A       N/A        Y       23496
>>>> Snapshot Daemon on gluster-2                49261     0          Y       38479
>>>> NFS Server on gluster-2                     2049      0          Y       39640
>>>> Self-heal Daemon on gluster-2               N/A       N/A        Y       39709
>>>> Snapshot Daemon on gluster1                 49480     0          Y       23982
>>>> NFS Server on gluster1                      2049      0          Y       23766
>>>> Self-heal Daemon on gluster1                N/A       N/A        Y       23776
>>>>
>>>> Task Status of Volume homes
>>>> ------------------------------------------------------------------------------
>>>> There are no active volume tasks
>>>
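>>> One sanity check, in case the upgrade reset any options: USS is the
>>> feature behind .snaps, so it should still show as enabled:
>>>
>>>     gluster volume get homes features.uss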
>>>
>>> I'd appreciate any ideas about troubleshooting this. I tried disabling
>>> .snaps access on the volume and re-enabling it (toggling features.uss,
>>> as below), but it made no difference.
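>>>
>>> For reference, the toggle was the features.uss option:
>>>
>>>     gluster volume set homes features.uss disable
>>>     gluster volume set homes features.uss enable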
>>>
>>>
>>>
>>
>>
>