[Gluster-users] dht log entries in fuse client after successful expansion/rebalance

Tim Robinson Tim.Robinson at humedica.com
Fri Mar 9 18:11:51 UTC 2012


Thanks for breaking it down, Jeff.  I can live with a remount on the
rare occasions we resize; I just wanted to make sure my test
environment was exhibiting the expected behavior.

Tim


On 3/9/12 12:11 PM, "Jeff Darcy" <jdarcy at redhat.com> wrote:

>On Fri, 9 Mar 2012 15:44:48 +0000
>Tim Robinson <Tim.Robinson at humedica.com> wrote:
>
>> mismatching layouts for .../bench
>> subvol: bfd-replicate-2;
>>	inode layout - 0 - 0;
>>	disk layout - 2863311530 - 4294967295
>> mismatching layouts for .../bench
>> subvol: bfd-replicate-0;
>>	inode layout - 0 - 2147483646;
>>	disk layout - 0 - 1431655764
>> mismatching layouts for .../bench
>> subvol: bfd-replicate-0;
>>	inode layout - 2147483647 - 4294967295;
>>	disk layout - 0 - 1431655764
>> mismatching layouts for .../debug
>> subvol: bfd-replicate-1;
>>	inode layout - 0 - 2147483646;
>>	disk layout - 1431655765 - 2863311529
>
>It looks like either your mail program or mine scrambled these lines a
>bit, so I've edited to show the most important bits.  Basically there
>are three messages for .../bench, corresponding to the three
>subvolumes, and then messages for .../debug on two out of three
>subvolumes.  I'd be worried if we saw these repeated for the same
>directory on the same subvolume, but (so far) that doesn't seem to be
>the case.  AFAICT the problem is just that the layout-revalidation code
>is being a lot more verbose than necessary.  This activity should tail
>off pretty quickly.
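
To see where those numbers come from: an even N-way split of the
32-bit DHT hash space reproduces both the old two-brick ranges (the
"inode layout" values the client still has cached) and the new
three-brick ranges (the "disk layout" values written by the
rebalance).  Here is a minimal Python sketch of that arithmetic; it is
illustrative only, not GlusterFS's actual layout code, and the
function and constant names are made up:

# Even split of the 32-bit hash space across N subvolumes (illustration only).
HASH_MAX = 0xFFFFFFFF  # 4294967295

def even_layout(n_subvols):
    """Return (start, end) hash ranges for an even n-way split."""
    chunk = HASH_MAX // n_subvols
    ranges = []
    for i in range(n_subvols):
        start = i * chunk
        # The last subvolume absorbs the rounding remainder.
        end = HASH_MAX if i == n_subvols - 1 else (i + 1) * chunk - 1
        ranges.append((start, end))
    return ranges

print(even_layout(2))  # -> [(0, 2147483646), (2147483647, 4294967295)]
print(even_layout(3))  # -> [(0, 1431655764), (1431655765, 2863311529),
                       #     (2863311530, 4294967295)]

The two-way ranges match the "inode layout" lines in the log and the
three-way ranges match the "disk layout" lines, consistent with the
client holding a pre-expansion layout in memory until revalidation
catches up.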
>
>> I have tried doing a self-heal on the client and rebalancing the
>> volume again, but the messages persist.  After remounting the volume
>> the messages stop.  The rebalance reports success in adjusting the
>> layout and redistributing files, and I can see from looking at the
>> bricks that pre-expansion files have been moved to the new brick and
>> post-expansion files are going to all three.  However, the excessive
>> client logging affects performance and would take up lots of space
>> under heavy use.  Does anybody know what might be happening here, and
>> how I can avoid these messages after expansion without remounting or
>> turning client logging off or down?
>
>We support dynamic reconfiguration on the servers, but not on the
>clients, so without remounting there's no good way to reduce the client
>log level.  I'd do it with gdb, but I don't think I can recommend that
>for the sort of environment where a remount would be precluded.
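
For what it's worth, the remount itself can be scripted so the client
comes back with a quieter log level.  A rough sketch in Python, with
made-up mount point and volume names, and assuming the glusterfs mount
helper on your system accepts a log-level option (check
mount.glusterfs(8) before relying on it):

# Illustrative only: unmount and remount a GlusterFS FUSE client with a
# lower client-side log level.  MOUNTPOINT and VOLUME are placeholders.
import subprocess

MOUNTPOINT = "/mnt/bfd"    # hypothetical mount point
VOLUME = "server1:/bfd"    # hypothetical volfile server and volume name

subprocess.check_call(["umount", MOUNTPOINT])
subprocess.check_call([
    "mount", "-t", "glusterfs",
    "-o", "log-level=WARNING",   # assumed to be supported by the mount helper
    VOLUME, MOUNTPOINT,
])

That does not remove the need for the remount Jeff describes; it just
makes it less painful to repeat after an expansion.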



