[Gluster-users] Slow healing times on large cinder and nova volumes

Pranith Kumar Karampuri pkarampu at redhat.com
Tue Apr 22 01:43:35 UTC 2014


Could you attach the log files, please?
You said the bricks were replaced. In the case of brick replacement, index-based self-heal doesn't work, so a full self-heal needs to be triggered using "gluster volume heal <volname> full". Could you confirm whether that command was issued?
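
For reference, a rough sequence (using <volname> as a placeholder for your actual volume name):

  # full heal crawls the whole volume instead of relying on the index,
  # which is what's needed after a brick is replaced
  gluster volume heal <volname> full

  # then keep an eye on the pending entries; the list should shrink over time
  gluster volume heal <volname> info

The self-heal daemon logs from each brick server (typically under /var/log/glusterfs/, e.g. glustershd.log) would also help.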

Pranith
----- Original Message -----
> From: "Larry Schmid" <lschmid at io.com>
> To: gluster-users at gluster.org
> Sent: Tuesday, April 22, 2014 4:07:39 AM
> Subject: [Gluster-users] Slow healing times on large cinder and nova volumes
>
> Hi guys,
>
> x-posted from irc.
>
> We're having an issue in our production OpenStack environment, which is backed
> by Gluster using two replicas (I know; I wasn't given a choice).
>
> We lost storage on one of the replica servers and so had to replace the failed
> bricks. The heal operation on the Cinder and Nova volumes is coming up on the
> two-week mark, and it seems as if it will never catch up and finish.
>
> Nova heal info shows a constantly fluctuating list with multiple heals on
> many of the files, as if it's trying to keep up with deltas. It’s at 860GB
> of 1.1TB.
>
> Cinder doesn't really seem to progress. It's at about 1.9T out of 6T
> utilized, though the sparse file sizes total about 30T. It has also done
> multiple heals on some of the files.
>
> I seem to be down to just watching it spin. Any help or tips?
>
> Thanks,
>
> Larry Schmid | Principal Cloud Engineer
>
> IO
>
> M +1.602.316.8639 | O +1.602.273.5431
> 
> E lschmid at io.com | io.com


