[Gluster-users] dht log entries in fuse client after successful expansion/rebalance

Tim Robinson Tim.Robinson at humedica.com
Fri Mar 9 15:44:48 UTC 2012


Hi

I'm using Gluster 3.2.5.  After expanding a 2x2 Distributed-Replicate
volume to 3x2 and performing a full rebalance, the fuse clients log the
following messages for every directory access:

[2012-03-08 10:53:56.953030] I [dht-common.c:524:dht_revalidate_cbk]
1-bfd-dht: mismatching layouts for /linux-3.2.9/tools/power/cpupower/bench
[2012-03-08 10:53:56.953065] I [dht-layout.c:682:dht_layout_dir_mismatch]
1-bfd-dht: subvol: bfd-replicate-2; inode layout - 0 - 0; disk layout -
2863311530 - 4294967295
[2012-03-08 10:53:56.953080] I [dht-common.c:524:dht_revalidate_cbk]
1-bfd-dht: mismatching layouts for /linux-3.2.9/tools/power/cpupower/bench
[2012-03-08 10:53:56.953218] I [dht-layout.c:682:dht_layout_dir_mismatch]
1-bfd-dht: subvol: bfd-replicate-0; inode layout - 0 - 2147483646; disk
layout - 0 - 1431655764
[2012-03-08 10:53:56.953239] I [dht-common.c:524:dht_revalidate_cbk]
1-bfd-dht: mismatching layouts for /linux-3.2.9/tools/power/cpupower/bench
[2012-03-08 10:53:56.966991] I [dht-layout.c:682:dht_layout_dir_mismatch]
1-bfd-dht: subvol: bfd-replicate-0; inode layout - 2147483647 -
4294967295; disk layout - 0 - 1431655764
[2012-03-08 10:53:56.967017] I [dht-common.c:524:dht_revalidate_cbk]
1-bfd-dht: mismatching layouts for /linux-3.2.9/tools/power/cpupower/debug
[2012-03-08 10:53:56.967118] I [dht-layout.c:682:dht_layout_dir_mismatch]
1-bfd-dht: subvol: bfd-replicate-1; inode layout - 0 - 2147483646; disk
layout - 1431655765 - 2863311529
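
For reference, the expansion and rebalance were done with the usual CLI
sequence, roughly like this (a sketch only: the server names and brick
paths below are placeholders rather than my real layout; the volume name
"bfd" matches the log entries above):

  # add one new replica pair to the 2x2 volume, making it 3x2
  gluster volume add-brick bfd server5:/export/brick1 server6:/export/brick1

  # fix the directory layouts and migrate existing files onto the new bricks
  gluster volume rebalance bfd start
  gluster volume rebalance bfd status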

I have tried doing a self-heal on the client and rebalancing the volume
again, but the messages persist.  After remounting the volume the messages
stop.  The rebalance reports success in adjusting the layout and
redistributing files, and looking at the bricks I can see that
pre-expansion files have been moved onto the new bricks and post-expansion
files are being distributed across all three replica pairs.  However, the
excessive client logging affects performance and would take up a lot of
log space under heavy use.  Does anybody know what might be happening
here, and how I can avoid these messages after expansion without
remounting or turning client logging off or down?
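
For completeness, the workarounds I have tried look roughly like this
(again a sketch: the mount point /mnt/bfd and server name are
placeholders):

  # trigger a client-side self-heal by walking the entire mount
  find /mnt/bfd -noleaf -print0 | xargs --null stat >/dev/null

  # re-run the rebalance
  gluster volume rebalance bfd start

  # remounting is the only thing that stops the messages
  umount /mnt/bfd
  mount -t glusterfs server1:/bfd /mnt/bfd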

Thanks.
Tim







