[Gluster-users] Replace corrupted brick
Freer, Eva B.
freereb at ornl.gov
Tue Sep 22 21:18:28 UTC 2015
Our configuration is a distributed, replicated volume with 7 pairs of bricks on 2 servers. We are in the process of adding additional storage for another brick pair. I placed the new disks in one of the servers late last week and used the LSI storcli command to make a RAID 6 volume of the new disks. We are running RedHat 6.6 and Gluster 3.7.1 on both servers.

Yesterday, I ran 'parted /dev/sdj' to create a partition on the new volume. Unfortunately, /dev/sdj was not the new volume (which is /dev/sdh). I realized the error right away, but the system was operating OK and it was late at night, so I decided to wait until today to try to fix this.

This morning, I ran 'parted rescue 0 36.0TB'. This runs, but does not find a partition to restore. I am using LVM, and the partition is /dev/mapper/vg_data5-lv_data5 with an xfs filesystem on it. The system continued to operate, but I expected that there would be problems on re-boot. I re-booted and, indeed, the system can't find the volume at /dev/mapper/vg_data5-lv_data5.

Is it possible to recover this volume in place, or do I need to just drop it from the gluster volume, recreate the LVM partition, and then copy the files from its partner brick on the other server? If I need to copy the files, what is the best procedure for doing it?
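For anyone searching the archive, the two recovery paths asked about above can be sketched roughly as below. This is only an outline: the volume group name (vg_data5) and device names are from this setup, but the PV UUID, the metadata archive filename, the Gluster volume name (gvol), and the brick paths are placeholders/assumptions, not values from the original post. Check everything against your own `/etc/lvm/archive` and `gluster volume info` output before running anything.

```shell
# --- Path 1: try to restore the clobbered LVM metadata in place ---
# LVM keeps automatic backups of VG metadata under /etc/lvm/archive,
# which survive an accidental repartitioning of the underlying disk.
vgcfgrestore --list vg_data5          # list available metadata backups

# Recreate the PV with its old UUID (taken from the listing above),
# then restore the VG configuration and reactivate it:
pvcreate --uuid <OLD-PV-UUID> \
         --restorefile /etc/lvm/archive/vg_data5_XXXXX.vg /dev/sdj
vgcfgrestore -f /etc/lvm/archive/vg_data5_XXXXX.vg vg_data5
vgchange -ay vg_data5

# Dry-run check of the XFS filesystem before mounting it:
xfs_repair -n /dev/mapper/vg_data5-lv_data5

# --- Path 2: rebuild the brick and let Gluster heal it ---
# If the LV cannot be recovered, recreate it and the filesystem, then
# swap the brick in; with a replicated volume, self-heal copies the
# files from the partner brick automatically (no manual copy needed).
gluster volume replace-brick gvol \
    server1:/bricks/data5_old/brick server1:/bricks/data5/brick \
    commit force
gluster volume heal gvol full
gluster volume heal gvol info         # watch healing progress
```

Note that with Path 2 it is generally safer to let the self-heal daemon populate the new brick than to copy files over by hand, since Gluster also needs to recreate its extended attributes on each file.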
TIA,
Eva Freer
Oak Ridge National Laboratory
freereb at ornl.gov