[Gluster-users] Can't heal a volume: "Please check if all brick processes are running."

Anatoliy Dmytriyev tolid at tolid.eu.org
Tue Mar 13 14:23:25 UTC 2018


Hi,


Can someone point me to documentation or explain this? I can't find
it myself.
Are there any other useful resources besides doc.gluster.org? As far
as I can see, many gluster options are either not described there or
have no explanation of what they actually do...
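
For what it's worth, the CLI itself can list the settable options with
short descriptions; a quick check, assuming a standard install:

# gluster volume set help

This prints each volume option together with a one-line description,
and "gluster volume get gv0 all" shows the values currently in effect.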


On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
> Hello,
> 
> We have a brand-new gluster 3.10.10 installation.
> Our volume is created as a distributed volume with 9 bricks, 96 TB in
> total (87 TB after the 10% gluster disk space reservation).
> 
> For some reason I can’t “heal” the volume:
> # gluster volume heal gv0
> Launching heal operation to perform index self heal on volume gv0 has
> been unsuccessful on bricks that are down. Please check if all brick
> processes are running.
> 
> Which processes should be run on every brick for heal operation?
> 
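As far as I understand it, each brick is served by its own glusterfsd
process, and index heal additionally needs the self-heal daemon
(glustershd), which glusterd only starts for volumes with a replicate
or disperse component. A rough way to check on each node, assuming the
standard process names:

# pgrep -af glusterfsd   # one brick process per local brick
# pgrep -af glustershd   # self-heal daemon; not started for plain distribute volumes
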
> # gluster volume status
> Status of volume: gv0
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick cn01-ib:/gfs/gv0/brick1/brick         0         49152      Y       70850
> Brick cn02-ib:/gfs/gv0/brick1/brick         0         49152      Y       102951
> Brick cn03-ib:/gfs/gv0/brick1/brick         0         49152      Y       57535
> Brick cn04-ib:/gfs/gv0/brick1/brick         0         49152      Y       56676
> Brick cn05-ib:/gfs/gv0/brick1/brick         0         49152      Y       56880
> Brick cn06-ib:/gfs/gv0/brick1/brick         0         49152      Y       56889
> Brick cn07-ib:/gfs/gv0/brick1/brick         0         49152      Y       56902
> Brick cn08-ib:/gfs/gv0/brick1/brick         0         49152      Y       94920
> Brick cn09-ib:/gfs/gv0/brick1/brick         0         49152      Y       56542
> 
> Task Status of Volume gv0
> ------------------------------------------------------------------------------
> There are no active volume tasks
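
Note that the status output above lists only Brick rows and no
"Self-heal Daemon" rows, which matches glustershd not running; the TCP
port showing 0 is expected here, since the volume uses transport-type
rdma. To query the self-heal daemon explicitly (assuming the 3.10 CLI
accepts the shd selector as current versions do):

# gluster volume status gv0 shd

On a plain distribute volume this is expected to report nothing useful,
for the reason above.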
> 
> 
> # gluster volume info gv0
> Volume Name: gv0
> Type: Distribute
> Volume ID: 8becaf78-cf2d-4991-93bf-f2446688154f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 9
> Transport-type: rdma
> Bricks:
> Brick1: cn01-ib:/gfs/gv0/brick1/brick
> Brick2: cn02-ib:/gfs/gv0/brick1/brick
> Brick3: cn03-ib:/gfs/gv0/brick1/brick
> Brick4: cn04-ib:/gfs/gv0/brick1/brick
> Brick5: cn05-ib:/gfs/gv0/brick1/brick
> Brick6: cn06-ib:/gfs/gv0/brick1/brick
> Brick7: cn07-ib:/gfs/gv0/brick1/brick
> Brick8: cn08-ib:/gfs/gv0/brick1/brick
> Brick9: cn09-ib:/gfs/gv0/brick1/brick
> Options Reconfigured:
> client.event-threads: 8
> performance.parallel-readdir: on
> performance.readdir-ahead: on
> cluster.nufa: on
> nfs.disable: on
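
Since the volume is Type: Distribute, there are no replica pairs to
heal, so the heal command failing is arguably the expected behaviour
rather than a sign of down brick processes. If the underlying goal is
to redistribute data across bricks (for example after adding bricks),
rebalance rather than heal is the relevant operation; a sketch,
assuming the gv0 volume above:

# gluster volume rebalance gv0 fix-layout start
# gluster volume rebalance gv0 status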

-- 
Best regards,
Anatoliy

