[Gluster-users] Strange file corruption
Udo Giacomozzi
udo.giacomozzi at indunet.it
Wed Dec 9 17:15:35 UTC 2015
On 09.12.2015 at 17:17, Joe Julian wrote:
>> A-1) shut down node #1 (the first that is about to be upgraded)
>> A-2) remove node #1 from the Proxmox cluster (pvecm delnode "metal1")
>> A-3) remove node #1 from the Gluster volume/cluster (gluster volume
>> remove-brick ... && gluster peer detach "metal1")
>> A-4) install Debian Jessie on node #1, overwriting all data on the
>> HDD - *with the same network settings and hostname as before*
>> A-5) install Proxmox 4.0
>> <https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie> on
>> node #1
>> A-6) install Gluster on node #1 and add it back to the Gluster volume
>> (gluster volume add-brick ...) => shared storage will be complete
>> again (spanning 3.4 and 4.0 nodes)
>> A-7) configure the Gluster volume as shared storage in Proxmox 4
>> (node #1)
>> A-8) configure the external Backup storage on node #1 (Proxmox 4)
>
> Was the data on the gluster brick deleted as part of step 4?
Yes, all data on the physical HDD was deleted (reformatted / repartitioned).
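For reference, step A-3 was roughly the following (reconstructed from
memory; the "replica 2" count and the brick path are my assumption,
matching the replica 3 volume in step A-6):

gluster volume remove-brick "systems" replica 2 \
    metal1:/data/gluster/systems force
gluster peer detach "metal1"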
> When you remove the brick, gluster will no longer track pending
> changes for that brick. If you add it back in with stale data but
> matching gfids, you would have two clean bricks with mismatching data.
> Did you have to use "add-brick...force"?
No, "force" was not necessary and the added directory
"/data/gluster/systems" did not exist.
These were the commands executed on node #2 during step A-6:
gluster volume add-brick "systems" replica 3 \
    metal1:/data/gluster/systems
gluster volume heal "systems" full    # to trigger sync
Then I waited for replication to finish before doing anything else
(about 1 hour or maybe more), checking progress with:

gluster volume heal "systems" info
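Something like this could be used to wait for the heal queue to drain
(just a sketch of the idea, not what I actually ran; it relies on the
"Number of entries:" lines that heal info prints for each brick):

while gluster volume heal "systems" info | grep -q "Number of entries: [1-9]"; do
    sleep 60    # re-check once a minute until every brick reports 0 entries
done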
Udo