[Gluster-users] lingering <gfid:*> entries in volume heal, gluster 3.6.3

Ravishankar N ravishankar at redhat.com
Sat Jul 16 04:00:14 UTC 2016


On 07/15/2016 10:56 PM, Kingsley wrote:
> On Fri, 2016-07-15 at 22:24 +0530, Ravishankar N wrote:
>> On 07/15/2016 09:55 PM, Kingsley wrote:
>>> This has revealed something. I'm now seeing lots of lines like this in
>>> the shd log:
>>>
>>> [2016-07-15 16:20:51.098152] D [afr-self-heald.c:516:afr_shd_index_sweep] 0-callrec-replicate-0: got entry: eaa43674-b1a3-4833-a946-de7b7121bb88
>>> [2016-07-15 16:20:51.099346] D [client-rpc-fops.c:1523:client3_3_inodelk_cbk] 0-callrec-client-2: remote operation failed: Stale file handle
>>> [2016-07-15 16:20:51.100683] D [client-rpc-fops.c:2686:client3_3_opendir_cbk] 0-callrec-client-2: remote operation failed: Stale file handle. Path: <gfid:eaa43674-b1a3-4833-a946-de7b7121bb88> (eaa43674-b1a3-4833-a946-de7b7121bb88)
>> Looks like the files are not present at all on client-2, which is why
>> you see these messages.
>> Find the file/directory names corresponding to these gfids on one of
>> the healthy bricks and check whether they are present on client-2 as
>> well. If not, try accessing them from the mount; that should create
>> any missing entries on client-2. Then launch the heal again.
>>
>> Hope this helps.
>> Ravi
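
To expand on the gfid-to-path step: every file on a brick is hard-linked
(directories are symlinked) under <brick>/.glusterfs/ by its gfid, so
something along these lines should find the real name. This is only a
sketch -- the brick path /export/brick1 and the mount point /mnt/callrec
are placeholders for your own paths, and the find invocation needs GNU
findutils for -samefile:

    BRICK=/export/brick1     # assumption: substitute your brick path
    MNT=/mnt/callrec         # assumption: substitute your client mount
    GFID=eaa43674-b1a3-4833-a946-de7b7121bb88

    # gfid links live at <brick>/.glusterfs/<aa>/<bb>/<full-gfid>,
    # where aa and bb are the first two byte-pairs of the gfid
    GPATH=$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID

    if [ -h "$GPATH" ]; then
        # directories: the .glusterfs entry is a symlink; resolve it
        readlink -f "$GPATH"
    else
        # regular files: the .glusterfs entry is a hard link, so find
        # the other name for the same inode, skipping .glusterfs itself
        find "$BRICK" -path "$BRICK/.glusterfs" -prune -o \
            -samefile "$GPATH" -print
    fi

Once you have the path relative to the brick root, a simple stat of the
same path under $MNT should trigger the lookup that creates the missing
entry on client-2.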
> OK, I think I'll script something to find those; searching for them by
> hand could take many hours.
>
> Meanwhile, is it safe for me to send a HUP to the glusterd process? And
> if so, might that make the shd re-establish its 0-callrec-client-2
> handle?
It is already connected to client-2, which is why you got ESTALE instead 
of ENOTCONN. If you want to restart the shd, do a "gluster volume start 
<volname> force".
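For example, with this volume (callrec, going by the log prefixes above):

    # gluster volume start callrec force
    # gluster volume status callrec

The status output should then show a running Self-heal Daemon entry for
each node.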
-Ravi
>
> Cheers,
> Kingsley.
>