[Bugs] [Bug 1348894] New: gluster volume heal info keep reports " Volume heal failed"

bugzilla at redhat.com bugzilla at redhat.com
Wed Jun 22 09:23:03 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1348894

            Bug ID: 1348894
           Summary: gluster volume heal info keep reports "Volume heal
                    failed"
           Product: GlusterFS
           Version: 3.6.3
         Component: replicate
          Severity: medium
          Assignee: bugs at gluster.org
          Reporter: atalur at redhat.com
                CC: biryulini at gmail.com, bugs at gluster.org,
                    craigyk at nimgs.com, dwilson at customink.com,
                    glusterbugs at louiszuckerman.com, jms.crsn at gmail.com,
                    joe at julianfamily.org, mailbox at s19n.net,
                    pauyeung at shopzilla.com, pkarampu at redhat.com,
                    roger.lehmann at marktjagd.de
        Depends On: 1113778



+++ This bug was initially created as a clone of Bug #1113778 +++

Description of problem:
gluster volume heal info keeps reporting "Volume heal failed" even after a
fresh install of gluster 3.5.1 and a newly created replicated volume.


Version-Release number of selected component (if applicable):
gluster 3.5.1

How reproducible:
volume create <vol> replica 2 storage1:/brick1 storage2:/brick2

Steps to Reproduce:
1. volume create <vol> replica 2 storage1:/brick1 storage2:/brick2
2. volume heal <vol> info
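(Note: as the comments below point out, the volume must also be started
before heal info can run. A complete minimal sequence, assuming the peers
storage1 and storage2 are already probed, would look like:)

volume create <vol> replica 2 storage1:/brick1 storage2:/brick2
volume start <vol>
volume heal <vol> info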


Actual results:
Volume heal failed

Expected results:
Brick: storage1:/brick1
Number of entries: 0

Brick: storage2:/brick2
Number of entries: 0

Additional info:

--- Additional comment from Peter Auyeung on 2014-06-26 18:12:31 EDT ---

The volume appears to work fine and files are able to heal (they show up
under "info healed").

Need to confirm whether this is only cosmetic.

--- Additional comment from Peter Auyeung on 2014-06-26 18:32:26 EDT ---

cli.log gives the following when heal info is run:
[2014-06-26 22:31:34.461244] W [cli-rl.c:106:cli_rl_process_line] 0-glusterfs:
failed to process line

--- Additional comment from Pranith Kumar K on 2014-06-26 22:19:22 EDT ---

Peter, I just tried it on my machine and it works. Krutika (another
developer) was wondering whether you have readline installed on your
machine.

What output do you get when you execute:
root at localhost - ~ 
07:46:45 :) ⚡ rpm -qa | grep readline
readline-devel-6.2-8.fc20.x86_64
readline-6.2-8.fc20.x86_64

Pranith

--- Additional comment from Peter Auyeung on 2014-06-26 22:49:04 EDT ---

I am on Ubuntu and do have readline installed:

# dpkg -l | grep readline
ii  libreadline5     5.2-11   GNU readline and history libraries, run-time libraries
ii  libreadline6     6.2-8    GNU readline and history libraries, run-time libraries
ii  readline-common  6.2-8    GNU readline and history libraries, common files

--- Additional comment from Pranith Kumar K on 2014-06-26 22:54:13 EDT ---

Stupid question, but let me ask anyway: the steps to reproduce have no step
about starting the volume. Did you start the volume? Does it give the info
after starting the volume?
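(A quick way to check, for what it's worth; gluster volume info prints a
"Status" line, as the volume info output later in this thread shows:)

gluster volume status <vol>
gluster volume info <vol>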

What version of Ubuntu are you using? I can probably install a VM and test
it once. Also, please give me the location of the debs you used for
installing.

--- Additional comment from Peter Auyeung on 2014-06-26 23:04:21 EDT ---

I am on 12.04:

# cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.4 LTS"
NAME="Ubuntu"
VERSION="12.04.4 LTS, Precise Pangolin"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu precise (12.04.4 LTS)"
VERSION_ID="12.04"

I am using semiosis's PPA:

add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.5
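(For completeness, the install from that PPA would then be the usual apt
sequence; glusterfs-server is assumed here to be the standard package name
in that PPA:)

apt-get update
apt-get install glusterfs-server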

Thanks
Peter

--- Additional comment from Peter Auyeung on 2014-06-26 23:05:25 EDT ---

And yes, I did start the volume and have live traffic over NFS.

--- Additional comment from  on 2014-06-30 19:21:05 EDT ---

I am having a similar problem. I had a brick fail in a 3x replica set, but
while everything seems to have recovered, I still get "Volume heal failed"
when running gluster volume heal <vol> info.

Both of the following report no failures:

gluster volume heal <vol> info heal-failed
gluster volume heal <vol> statistics

I'm running 3.5.1.

--- Additional comment from Joe Julian on 2014-07-07 11:38:48 EDT ---

According to Pranith, the 3.5 Ubuntu debs don't include the
/usr/bin/glfsheal binary. Semiosis will repackage ASAP.
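(A quick way to check whether the installed packages ship the binary; a
sketch, using dpkg -S to search installed packages for a file:)

dpkg -S glfsheal
ls -l /usr/bin/glfsheal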

--- Additional comment from  on 2014-07-17 13:25:03 EDT ---

Has this been done? I've been checking the repo for updates and haven't
seen any.

--- Additional comment from Igor Biryulin on 2015-02-18 13:48:18 EST ---

This problem exists on gluster 3.6.2.
OS: Ubuntu 12.04.5 LTS

--- Additional comment from Igor Biryulin on 2015-02-18 14:09:58 EST ---

Sorry! I understand now that this is a problem with the Ubuntu packaging. I
will try writing to their maintainers.

--- Additional comment from James Carson on 2015-03-06 15:50:32 EST ---

I have a similar problem. I upgraded to 3.6.2. As you can see below, I have
a volume named "james_test", but when I try to get heal info on that volume
it tells me the volume does not exist.

Note: heal statistics does work.

[root at appdev0 glusterfs-3.6.2]# gluster volume heal james_test info
Volume james_test does not exist
Volume heal failed


[root at appdev0 glusterfs-3.6.2]# gluster volume info

Volume Name: james_test
Type: Replicate
Volume ID: 044ca3d6-1a89-49a1-b563-b1a2f6d15900
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: appdev0:/export/james_test
Brick2: appdev1:/export/james_test
Brick3: hpbxdev:/export/james_test

[root at appdev0 glusterfs-3.6.2]# gluster volume heal james_test statistics
Gathering crawl statistics on volume james_test has been successful 
------------------------------------------------

Crawl statistics for brick no 0
Hostname of brick appdev0
....

--- Additional comment from  on 2015-05-20 08:18:23 EDT ---

Just installed GlusterFS 3.6.2 from Ubuntu PPA
(http://ppa.launchpad.net/gluster/glusterfs-3.6/ubuntu) on Ubuntu trusty 14.04,
experiencing the same issue. Readline installed, glfsheal installed under
/usr/sbin; 6x2 replicated distributed volume, started.
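(Note that an earlier comment expects the binary at /usr/bin/glfsheal; a
quick check of where it actually lives, assuming standard tools, would be:)

which glfsheal
ls -l /usr/bin/glfsheal /usr/sbin/glfsheal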

--- Additional comment from Roger Lehmann on 2015-06-10 10:30:34 EDT ---

Same problem here after updating from 3.6.1 to 3.6.3 on one of my three
cluster nodes. Now I'm afraid to update the other ones. Using Debian
Wheezy.

--- Additional comment from Niels de Vos on 2016-06-17 12:23:41 EDT ---

This bug is being closed because the 3.5 release is marked End-Of-Life.
There will be no further updates to this version. If you are still facing
this issue in a more current release, please open a new bug against a
version that still receives bugfixes.


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1113778
[Bug 1113778] gluster volume heal info keep reports "Volume heal failed"