[Gluster-users] delete brick / format / add empty brick

Alun James ajames at tibus.com
Tue Jan 7 16:10:42 UTC 2014


Hi folks, 


I had a 2-node (1 brick each) replica volume. Some network meltdown issues seemed to cause problems on the second node (server02): the glusterfsd process was reaching 200-300% CPU, with errors pointing to possible split-brain and self-heal failures. 
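
For reference, the usual way to surface these (on glusterfs 3.3 or later, I believe) is the heal info output on the good node: 

gluster volume heal myvol info 
gluster volume heal myvol info split-brain 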


Original volume info: 


Volume Name: myvol 

Type: Replicate 
Status: Started 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: server01:/brick1 
Brick2: server02:/brick1 


I removed the second brick (the one on the problem server). 


gluster volume remove-brick myvol replica 1 server02:/brick1 
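
(N.B. depending on the glusterfs version, the CLI may insist on an explicit force when shrinking the replica count, i.e.: 

gluster volume remove-brick myvol replica 1 server02:/brick1 force ) 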


Now the volume status is: 



Volume Name: myvol 
Type: Distribute 
Status: Started 
Number of Bricks: 1 
Transport-type: tcp 
Bricks: 
Brick1: server01:/brick1 


All is fine and the data on the working server is sound. 


The xfs partition for server02:/brick1 has been formatted, so the data there is gone. All other gluster config data has remained untouched. Can I re-add the second server to the volume with an empty brick, and will the data auto-replicate over from the working server? 


gluster volume add-brick myvol replica 2 server02:/brick1 ?? 
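
In other words, something along these lines (just a sketch, assuming server02 is still in the trusted pool and /brick1 has been recreated empty after the format): 

# confirm server02 is still a trusted peer 
gluster peer status 

# on server02: recreate the now-empty brick directory 
mkdir -p /brick1 

# re-add the brick, converting the volume back to replica 2 
gluster volume add-brick myvol replica 2 server02:/brick1 

# trigger a full self-heal from the good copy and watch progress 
gluster volume heal myvol full 
gluster volume heal myvol info 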







ALUN JAMES 
Senior Systems Engineer 
Tibus 

T: +44 (0)28 9033 1122 
E: ajames at tibus.com 
W: www.tibus.com 

Follow us on Twitter @tibus 
