[Gluster-users] errors after removing bricks

Eyal Marantenboim eyal at theserverteam.com
Thu Apr 18 10:45:53 UTC 2013


I had a 4-node replicated setup.
After removing the brick on one of the nodes with:

gluster> volume remove-brick images_1 replica 3  vmhost5:/exports/1
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick commit force successful

I started to see high CPU usage on 2 of the remaining boxes, and this
error in glustershd.log:

[2013-04-18 12:40:47.160559] E
[afr-self-heald.c:685:_link_inode_update_loc] 0-images_1-replicate-0: inode
link failed on the inode (00000000-0000-0000-0000-000000000000)
[2013-04-18 12:41:55.784510] I
[afr-self-heald.c:1082:afr_dir_exclusive_crawl] 0-images_1-replicate-0:
Another crawl is in progress for images_1-client-1
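In case it helps diagnose, the state can be inspected with the standard gluster CLI. This is only a sketch of the checks that seem relevant here; the volume name is taken from the transcript above, and these commands require a live cluster:

```shell
# Check which entries self-heal still considers pending
# (the shd log above suggests a crawl is stuck or looping):
gluster volume heal images_1 info

# Confirm the volume now actually reports replica 3
# across the remaining bricks after the remove-brick:
gluster volume info images_1

# If pending entries never drain, a full heal crawl can be forced:
gluster volume heal images_1 full
```
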

Any ideas?
