[Gluster-devel] 3.6.2 volume heal

Pranith Kumar Karampuri pkarampu at redhat.com
Tue Feb 3 07:11:33 UTC 2015


On 02/03/2015 12:13 PM, Raghavendra Bhat wrote:
> On Monday 02 February 2015 09:07 PM, David F. Robinson wrote:
>> I upgraded one of my bricks from 3.6.1 to 3.6.2 and I can no longer 
>> run 'gluster volume heal homegfs info'.  It hangs and never returns 
>> any information.
>> I was trying to ensure that gfs01a had finished healing before 
>> upgrading the other machines (gfs01b, gfs02a, gfs02b) in my 
>> configuration (see below).
>> 'gluster volume heal homegfs statistics' still works fine.
>> Do I need to upgrade my other bricks to get the 'gluster volume heal 
>> homegfs info' working?  Or, should I fix this issue before upgrading 
>> my other machines?
>> Volume Name: homegfs
>> Type: Distributed-Replicate
>> Volume ID: 1e32672a-f1b7-4b58-ba94-58c085e59071
>> Status: Started
>> Number of Bricks: 4 x 2 = 8
>> Transport-type: tcp
>> Bricks:
>> Brick1: gfsib01a.corvidtec.com:/data/brick01a/homegfs
>> Brick2: gfsib01b.corvidtec.com:/data/brick01b/homegfs
>> Brick3: gfsib01a.corvidtec.com:/data/brick02a/homegfs
>> Brick4: gfsib01b.corvidtec.com:/data/brick02b/homegfs
>> Brick5: gfsib02a.corvidtec.com:/data/brick01a/homegfs
>> Brick6: gfsib02b.corvidtec.com:/data/brick01b/homegfs
>> Brick7: gfsib02a.corvidtec.com:/data/brick02a/homegfs
>> Brick8: gfsib02b.corvidtec.com:/data/brick02b/homegfs
>> Options Reconfigured:
>> performance.io-thread-count: 32
>> performance.cache-size: 128MB
>> performance.write-behind-window-size: 128MB
>> server.allow-insecure: on
>> network.ping-timeout: 10
>> storage.owner-gid: 100
>> geo-replication.indexing: off
>> geo-replication.ignore-pid-check: on
>> changelog.changelog: on
>> changelog.fsync-interval: 3
>> changelog.rollover-time: 15
>> server.manage-gids: on
>>
>>
>
> CCing Pranith, the maintainer of replicate. In the meantime, can you 
> please provide the logs from the machine where you have upgraded?
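For reference, and assuming a default installation, the logs that usually 
matter for a hung 'heal info' live under /var/log/glusterfs/ on the 
upgraded node; the file names below are the usual defaults and may differ 
per setup.

    # on the upgraded node (gfs01a)
    ls /var/log/glusterfs/
    # typically relevant:
    #   cli.log                            - the hanging CLI invocation
    #   glfsheal-homegfs.log               - helper process behind 'heal info'
    #   glustershd.log                     - self-heal daemon
    #   etc-glusterfs-glusterd.vol.log     - glusterd
    #   bricks/data-brick01a-homegfs.log   - per-brick logs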
Anuradha already followed up with David. It seems he got out of this 
situation by upgrading the other node and removing some files that had 
gone into split-brain.
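For anyone hitting the same thing, a rough sketch of that manual procedure 
on 3.6, with the brick path and file name as placeholders (remove the 
unwanted copy from the brick only, never through the mount):

    # list entries currently in split-brain (run on any server)
    gluster volume heal homegfs info split-brain

    # on the brick holding the copy to discard, note the file's gfid
    getfattr -d -m . -e hex /data/brick01a/homegfs/path/to/file

    # remove that copy and its gfid hard link under .glusterfs
    rm /data/brick01a/homegfs/path/to/file
    rm /data/brick01a/homegfs/.glusterfs/<first-2-gfid-chars>/<next-2>/<full-gfid>

    # trigger self-heal so the surviving copy is replicated back
    gluster volume heal homegfs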
The next step we are following up on with David is to check whether the 
'heal info' command is being run from the 3.6.1 nodes or the 3.6.2 nodes.
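Something like the following on each node should confirm that (the package 
query assumes an RPM-based install; 'heal info' needs to be run locally on 
the node being checked):

    # confirm which version the node is running
    gluster --version | head -1
    rpm -q glusterfs-server

    # then run heal info locally, once from a 3.6.1 node and once from a 3.6.2 node
    gluster volume heal homegfs info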

Pranith
>
> Regards,
> Raghavendra Bhat
