[Gluster-users] Bitrot strange behavior
Cedric Lemarchand
yipikai7 at gmail.com
Wed Apr 18 18:20:46 UTC 2018
Hi Sweta,
Thanks, this raises some more questions:
1. What is the reason for delaying signature creation?
2. Since the same file (replicated or dispersed) having different signatures across bricks is by definition an error, it would be good to detect it during a scrub, or with a different tool. Is something like this planned? (A crude manual workaround is sketched below.)
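
For now, comparing the signatures by hand seems to be the only cross-brick check, e.g. something like this run against the same file on each node (untested sketch, file name hypothetical; brick paths from the volume info below):

getfattr -n trusted.bit-rot.signature -e hex /data/brick1/myfile

If the hex values differ between the bricks, one copy is bad.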
Cheers
—
Cédric Lemarchand
> On 18 Apr 2018, at 07:53, Sweta Anandpara <sanandpa at redhat.com> wrote:
>
> Hi Cedric,
>
> Any file is picked up for signing by the bitd process after the predetermined wait of 120 seconds. This default value is captured in the volume option 'features.expiry-time' and is configurable - in your case, it can be set to 0 or 1.
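>
> For example, to have files signed almost immediately after creation (vol1 being the volume from your output below):
>
> # gluster volume set vol1 features.expiry-time 1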
>
> Point 2 is correct. A file corrupted before the bitrot signature is generated will not be successfully detected by the scrubber. That would require admin/manual intervention to explicitly heal the corrupted file.
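>
> On a replicate volume that would roughly mean: on the node holding the bad copy, remove the file from the brick backend (along with its gfid hardlink under the brick's .glusterfs/ directory), then trigger a heal so a good replica is copied back. A minimal sketch, file name hypothetical:
>
> rm /data/brick1/myfile
> gluster volume heal vol1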
>
> -Sweta
>
> On 04/16/2018 10:42 PM, Cedric Lemarchand wrote:
>> Hello,
>>
>> I am playing around with the bitrot feature and have some questions:
>>
>> 1. when a file is created, the "trusted.bit-rot.signature" attribute
>> seems to be created only approximately 120 seconds after the file's
>> creation (the cluster is idle and there is only one file living on
>> it). Why? Is there a way to have this attribute generated at the same
>> time as the file creation?
>>
>> 2. corrupting a file (appending a 0 locally on a brick) before the
>> "trusted.bit-rot.signature" attribute is created does not produce any
>> warning: its signature differs from the two other copies on the other
>> bricks, yet starting a scrub did not show anything. I would have
>> expected Gluster to compare signatures between bricks for this
>> particular use case, but it seems the check is only local, so a file
>> corrupted before its bitrot signature is created stays corrupted, and
>> could thus be served to clients with bad data? (The exact commands I
>> used are sketched below.)
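>>
>> Roughly what I did (file name hypothetical):
>>
>> # watch the signature xattr appear on the brick
>> getfattr -n trusted.bit-rot.signature -e hex /data/brick1/myfile
>> # corrupt one copy directly on the brick, before it is signed
>> echo 0 >> /data/brick1/myfile
>> # then run and check a scrub
>> gluster volume bitrot vol1 scrub ondemand
>> gluster volume bitrot vol1 scrub status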
>>
>> Gluster 3.12.8 on Debian Stretch, bricks on ext4.
>>
>> Volume Name: vol1
>> Type: Replicate
>> Volume ID: 85ccfaf2-5793-46f2-bd20-3f823b0a2232
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster-01:/data/brick1
>> Brick2: gluster-02:/data/brick2
>> Brick3: gluster-03:/data/brick3
>> Options Reconfigured:
>> storage.build-pgfid: on
>> performance.client-io-threads: off
>> nfs.disable: on
>> transport.address-family: inet
>> features.bitrot: on
>> features.scrub: Active
>> features.scrub-throttle: aggressive
>> features.scrub-freq: hourly
>>
>> Cheers,
>>
>> Cédric
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>