[Gluster-users] How to recover a replicate volume

Atin Mukherjee amukherj at redhat.com
Mon Mar 7 03:49:37 UTC 2016



On 03/07/2016 07:40 AM, songxin wrote:
> Hi all,
> I have a problem with recovering a replicate volume.
> 
> precondition:
> glusterfs version: 3.7.6
> brick of A board: 128.224.95.140:/data/brick/gv0
> brick of B board: 128.224.162.255:/data/brick/gv0
> 
> reproduce:
> 1. gluster peer probe 128.224.162.255                                (on A board)
> 2. gluster volume create gv0 replica 2 128.224.95.140:/data/brick/gv0
>    128.224.162.255:/data/brick/gv0 force                             (on A board)
> 3. gluster volume start gv0                                          (on A board)
> 4. reboot the B board
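Before rebooting B, by the way, it is worth confirming that the cluster
and the volume are actually healthy. A minimal check from the A board
(the status strings below are the 3.7.x defaults) would be:

    gluster peer status        # B should show "Peer in Cluster (Connected)"
    gluster volume info gv0    # both bricks should be listed
    gluster volume status gv0  # both brick processes should show Online "Y"

If B is already rejected or a brick is already down at this point, the
reboot is not the real trigger.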
> 
> After the B board reboots, I sometimes see the problems below.
> 1. The peer status is sometimes "Rejected" when I run "gluster peer
> status". (on A or B board)
This is where you get into the problem. I am really not sure what
happens when you reboot a board. In our earlier conversation about a
similar problem you mentioned that a board reboot doesn't wipe out
/var/lib/glusterd; please double-confirm that.
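A quick way to verify is to look at glusterd's working directory on B
right after the reboot; these paths are the stock defaults, so adjust
them if your build relocates the state directory:

    cat /var/lib/glusterd/glusterd.info   # the node UUID should be unchanged
    ls /var/lib/glusterd/peers/           # A's peer file should still be there
    ls /var/lib/glusterd/vols/gv0/        # the volume configuration should persist

If the UUID in glusterd.info differs across reboots, B effectively comes
back as a brand-new node, which would explain the rejected state.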

Also please send cmd_history.log along with the glusterd log from both
nodes. And post reboot, are you also trying to detach/probe A? If so,
were A & B in Connected state in the cluster before the detach?
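On a default install both logs should be under /var/log/glusterfs/:

    /var/log/glusterfs/cmd_history.log
    /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # the glusterd log

(the glusterd log name can differ if glusterd was started with a custom
--log-file option).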

> 2. The brick on the B board is sometimes offline when I run "gluster
> volume status". (on A or B board)
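An offline brick can usually be brought back without touching the volume
definition at all; assuming the brick directory on B is intact,

    gluster volume start gv0 force   # restarts any brick process that is down

is far less invasive than the remove-brick/add-brick cycle you describe
below.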
> 
> I want to know what I should do to recover my replicate volume.
> 
> PS.
> For now I run the following operations to recover my replicate volume,
> but sometimes I can't sync all the files in the volume even if I run
> "heal full".
> 1. gluster volume remove-brick gv0 replica 1
>    128.224.162.255:/data/brick/gv0 force                             (on A board)
> 2. gluster peer detach 128.224.162.255                               (on A board)
> 3. gluster peer probe 128.224.162.255                                (on A board)
> 4. gluster volume add-brick gv0 replica 2
>    128.224.162.255:/data/brick/gv0 force                             (on A board)
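If files still fail to sync after the brick is re-added, the heal can be
triggered and watched explicitly; a minimal sequence for this volume
would be:

    gluster volume heal gv0 full                # crawl the whole volume
    gluster volume heal gv0 info                # entries still pending heal
    gluster volume heal gv0 info split-brain    # check for split-brain entries

Note that a freshly added brick has no heal index to work from, which is
presumably why "heal full" (a full crawl) is needed here rather than the
normal index-based heal.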
> 
> 
> 
> Please help me.
> 
> Thanks,
> Xin