<div dir="ltr">Ok, it's been a while, but I'm getting back to this "project".<br>I was unable to get gluster for the platform: the machines are ARM-based, and there are no ARM binaries on the gluster package repo. I tried building it instead, but the version of gluster I was running was quite old, and I couldn't get all the right package versions to do a successful build. <div>As a result, it sounds like my best option is to follow your alternate suggestion: <br>"The other option is to setup a new cluster and volume and then mount the volume via FUSE and copy the data from one of the bricks."<br><br>I want to be sure I understand what you're saying, though. Here's my plan:<br>create 3 VMs on amd64 processors(*)<br>Give each a 100G brick</div><div>set up the 3 bricks as disperse</div><div>mount the new gluster volume on my workstation</div><div>copy directories from one of the old bricks to the mounted new GFS volume</div><div>Copy fully restored data from new GFS volume to workstation or whatever permanent setup I go with.</div><div><br></div><div>Is that right? Or do I want the GFS system to be offline while I copy the contents of the old brick to the new brick?</div><div><br>(*) I'm not planning to keep my GFS on VMs on cloud, I just want something temporary to work with so I don't blow up anything else.<br><br><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 12 Aug 2023 at 09:20, Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>
On Sat, 12 Aug 2023 at 09:20, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

> If you preserved the gluster structure in /etc/ and /var/lib, you should be able to run the cluster again.
> First install the same gluster version on all nodes, then overwrite the structure in /etc and in /var/lib.
> Once you mount the bricks, start glusterd and check the situation.
>
> The other option is to set up a new cluster and volume and then mount the volume via FUSE and copy the data from one of the bricks.
>
> Best Regards,
> Strahil Nikolov
>
> On Saturday, August 12, 2023, 7:46 AM, Richard Betel <emteeoh@gmail.com> wrote:
>
>> I had a small cluster with a disperse 3 volume. 2 nodes had hardware failures and no longer boot, and I don't have replacement hardware for them (it's an old board called a PC-duino). However, I do have their intact root filesystems and the disks the bricks are on.
>>
>> So I need to rebuild the cluster on all new host hardware. Does anyone have any suggestions on how to go about doing this? I've built 3 VMs to be a new test cluster, but if I copy over a file from the 3 nodes and try to read it, I can't, and I get errors in /var/log/glusterfs/foo.log:
>>
>> [2023-08-12 03:50:47.638134 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk] 0-gv-client-0: remote operation failed. [{path=/helmetpart.scad}, {gfid=00000000-0000-0000-0000-000000000000}, {errno=61}, {error=No data available}]
>> [2023-08-12 03:50:49.834859 +0000] E [MSGID: 122066] [ec-common.c:1301:ec_prepare_update_cbk] 0-gv-disperse-0: Unable to get config xattr. FOP : 'FXATTROP' failed on gfid 076a511d-3721-4231-ba3b-5c4cbdbd7f5d. Parent FOP: READ [No data available]
>> [2023-08-12 03:50:49.834930 +0000] W [fuse-bridge.c:2994:fuse_readv_cbk] 0-glusterfs-fuse: 39: READ => -1 gfid=076a511d-3721-4231-ba3b-5c4cbdbd7f5d fd=0x7fbc9c001a98 (No data available)
>>
>> So obviously, I need to copy over more stuff from the original cluster. If I force the 3 nodes and the volume to have the same UUIDs, will that be enough?
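P.S. For completeness, in case I can get matching amd64 packages and end up revisiting your first suggestion: my understanding is that it amounts to something like the sketch below, where /backup/oldnode1 is just a placeholder for wherever I stashed an old node's preserved root filesystem.

  # on each replacement node, after installing the same gluster
  # version the old cluster ran:
  systemctl stop glusterd
  rsync -a /backup/oldnode1/etc/glusterfs/ /etc/glusterfs/
  rsync -a /backup/oldnode1/var/lib/glusterd/ /var/lib/glusterd/
  # mount the original brick disk at its old path, then:
  systemctl start glusterd
  gluster peer status
  gluster volume info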