[Gluster-users] Inconsistent volume
Andy Pace
APace at singlehop.com
Mon Jul 26 20:37:17 UTC 2010
I too would like to know how to "sync up" a replicated pair of bricks. Right now I've got a slight difference between the two, and running scale-n-defrag.sh didn't do much either. Looking forward to some help :)
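For what it's worth, the trick described in the 3.x docs is to walk the whole mount and stat every file, which is supposed to trigger self-heal on anything out of sync. A rough sketch, assuming the volume is mounted at /mnt/glusterfs (substitute your own mount point):

  # stat'ing every file through the mount point triggers self-heal
  # on out-of-sync replicas; the output itself is discarded.
  find /mnt/glusterfs -noleaf -print0 | xargs --null stat >/dev/null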
df output from the two bricks (1K-blocks, used, available, use%, mount point):

  13182120616  139057220  12362648984  2%  /export
  13181705324  139057208  12362233500  2%  /export
Granted, it's a very small amount (and the total available is slightly different too), but the amount used should be the same, no?
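To pin down where the difference actually lives, I'm tempted to diff file listings taken straight from the two backends. A rough sketch (the server names and the /export path are placeholders for your own setup):

  # On each replica server, list the brick's contents, then compare:
  ssh server1 'cd /export && find . | sort' > server1.list
  ssh server2 'cd /export && find . | sort' > server2.list
  diff server1.list server2.list

  # Comparing per-brick usage shows which side holds the extra blocks:
  ssh server1 'du -s /export'
  ssh server2 'du -s /export'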
-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Steve Wilson
Sent: Monday, July 26, 2010 3:35 PM
To: gluster-users at gluster.org
Subject: [Gluster-users] Inconsistent volume
I have a volume that is distributed and replicated. While deleting a directory structure on the mounted volume, I also restarted the GlusterFS daemon on one of the replicated servers. After the "rm -rf" command completed, it complained that it couldn't delete a directory because it wasn't empty, yet from the perspective of the mounted volume that directory appeared empty. Looking at the individual bricks, though, I could see that files remained in the directory.
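For reference, this is roughly how the bricks can be inspected directly; the directory path below is just an example, and getfattr needs to run as root on the brick servers:

  # On each replica server, look at the backend directory itself:
  ls -laR /export/path/to/stubborn/dir
  # The AFR changelog attributes that drive self-heal are stored as
  # extended attributes on the backend files and directories:
  getfattr -m trusted.afr -d -e hex /export/path/to/stubborn/dir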
My question: what is the proper way to correct this problem and bring the volume back to a consistent state? I've tried using the "ls -alR" command to force a self-heal, but for some reason this always causes the volume to become unresponsive from any client after 10 minutes or so.
Some clients/servers are running version 3.0.4 while the others are running 3.0.5.
Thanks!
Steve
--
Steven M. Wilson, Systems and Network Manager
Markey Center for Structural Biology, Purdue University
(765) 496-1946
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users