[Gluster-users] How to identify a file's shards?

Krutika Dhananjay kdhananj at redhat.com
Mon Apr 25 04:36:51 UTC 2016


Lindsay,

Could you send the logs from your setup - the brick, shd, and client logs?
Also send the gfids of these 8 shards.

-Krutika

On Mon, Apr 25, 2016 at 3:27 AM, Paul Cuzner <pcuzner at redhat.com> wrote:

>
>
> Just wondering how shards can silently differ across the bricks of a
> replica? Lindsay caught this issue thanks to her due diligence in taking on
> 'new' tech - and resolved the inconsistency, but tbh this shouldn't be an
> admin's job :(
>
>
>
> On Sun, Apr 24, 2016 at 7:06 PM, Krutika Dhananjay <kdhananj at redhat.com>
> wrote:
>
>> OK. Under normal circumstances it would have been possible to heal a
>> single file by issuing a lookup on it (i.e. a stat on the file from the
>> mountpoint). But with shards this won't work: we take care not to expose
>> /.shard on the mountpoint, and as a result any attempt to issue a lookup
>> on a shard from the mountpoint will be met with an 'operation not
>> permitted' error.
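>>
>> For example (a rough illustration - the mount path, file name, and shard
>> name below are placeholders, assuming a FUSE mount at /mnt/glusterfs):
>>
>>     # a stat from the mountpoint issues a lookup on the file,
>>     # which can trigger self-heal of that one file:
>>     stat /mnt/glusterfs/images/vm-disk.qcow2
>>
>>     # the same trick fails for a shard, since /.shard is not exposed:
>>     stat /mnt/glusterfs/.shard/<base-gfid>.1
>>     # => stat: cannot stat '...': Operation not permitted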
>>
>> -Krutika
>>
>> On Sun, Apr 24, 2016 at 11:42 AM, Lindsay Mathieson <
>> lindsay.mathieson at gmail.com> wrote:
>>
>>> On 24/04/2016 2:56 PM, Krutika Dhananjay wrote:
>>>
>>>> Nope, it's not necessary for them to all have the xattr.
>>>>
>>>
That's good :)
>>>
>>>
>>>> Do you see anything at least in .glusterfs/indices/dirty on all bricks?
>>>>
>>>
I checked; the dirty dir is empty on all bricks.
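>>>
>>> (Verified with something like the following on each brick host; the brick
>>> path is a placeholder for my actual ones:
>>>
>>>     ls -A /path/to/brick/.glusterfs/indices/dirty
>>>
>>> No entries on any of the three bricks.)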
>>>
I used diff3 to compare the checksums of the shards across the three
>>> bricks. It revealed that seven of the shards were the same on two bricks
>>> (vna & vng) and one of the shards was the same on two other bricks (vna &
>>> vnb). Fortunately none were different on all 3 bricks :)
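>>>
>>> Roughly like this (the brick path and base gfid are placeholders):
>>>
>>>     # on each brick host, checksum every shard of the VM image:
>>>     md5sum /path/to/brick/.shard/<base-gfid>.* | sort -k2 > $(hostname).md5
>>>
>>>     # after gathering the three lists on one host, compare them:
>>>     diff3 vna.md5 vnb.md5 vng.md5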
>>>
Using the checksums as a quorum, I deleted all the singleton shards (7 on
>>> vnb, 1 on vng), touched the owning file, and issued a "heal full". All 8
>>> shards were restored with checksums matching the other two bricks. A
>>> recheck of the entire set of shards for the VM showed all 3 copies as
>>> identical, and the VM itself is functioning normally.
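>>>
>>> In outline (the volume name, paths, and shard names are placeholders):
>>>
>>>     # 1. on each brick holding an outvoted singleton copy, remove the
>>>     #    shard file (and its gfid hard-link under .glusterfs, if present):
>>>     rm /path/to/brick/.shard/<base-gfid>.<n>
>>>
>>>     # 2. from a client mount, touch the owning file:
>>>     touch /mnt/glusterfs/images/vm-disk.qcow2
>>>
>>>     # 3. trigger a full heal:
>>>     gluster volume heal <volname> full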
>>>
It's one way to manually heal shard mismatches that gluster hasn't
>>> detected, if somewhat tedious. It's a method that lends itself to
>>> automation, though - for instance, along the lines of the sketch below.
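>>>
>>> A rough sketch of such a check (the hosts, brick path, base gfid, and the
>>> passwordless-ssh setup are all assumptions):
>>>
>>>     #!/bin/bash
>>>     # Collect per-brick checksums of a file's shards and report any
>>>     # shard whose copies disagree.
>>>     HOSTS="vna vnb vng"          # the three replica hosts
>>>     BRICK=/path/to/brick         # same brick path on each host
>>>     GFID="<base-gfid>"           # gfid of the sharded file
>>>
>>>     for h in $HOSTS; do
>>>         ssh "$h" "md5sum $BRICK/.shard/$GFID.*" | sort -k2 > "/tmp/$h.md5"
>>>     done
>>>
>>>     # lines reported by diff3 are the shards needing manual repair:
>>>     diff3 /tmp/vna.md5 /tmp/vnb.md5 /tmp/vng.md5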
>>>
>>>
>>> Cheers,
>>>
>>>
>>> --
>>> Lindsay Mathieson
>>>
>>>
>>
>>
>
>