<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Sep 20, 2018 at 1:29 AM, Raghavendra Gowdappa <span dir="ltr"><<a href="mailto:rgowdapp@redhat.com" target="_blank">rgowdapp@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Can you give volume info? Looks like you are using 2 way replica.<br></div></blockquote><div><br></div><div>Yes indeed.<br>
gluster volume create gvol0 replica 2 gfs01:/glusterdata/brick1/gvol0 gfs02:/glusterdata/brick2/gvol0<br><br></div><div>+Pranith. +Ravi.<br><br></div><div>I'm not sure whether 2-way replication caused this. From what I understand, we need either 3-way replication or an arbiter for correct resolution of heals (an arbiter setup needs a third node, e.g. a hypothetical gfs03: gluster volume create gvol0 replica 3 arbiter 1 gfs01:/glusterdata/brick1/gvol0 gfs02:/glusterdata/brick2/gvol0 gfs03:/glusterdata/brick3/gvol0).<br><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Sep 19, 2018 at 9:39 AM, Johan Karlsson <span dir="ltr"><<a href="mailto:Johan.Karlsson@dgc.se" target="_blank">Johan.Karlsson@dgc.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I have two servers set up with GlusterFS in replica mode, with a single volume exposed via a mountpoint. The servers are running Ubuntu 16.04 LTS.<br>
<br>
After a package upgrade + reboot of both servers, it was discovered that the data was completely gone. New data written to the volume via the mountpoint is replicated correctly, and the gluster status/info commands state that everything is OK (no split-brain scenario, no healing needed, etc.). But the previous data is completely gone, not even present on any of the bricks.<br>
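For reference, the status/heal checks mentioned above were along these lines (a sketch; gvol0 is the volume name from the setup below):<br>

```shell
# Overall brick and daemon status
gluster volume status gvol0

# Volume configuration
gluster volume info gvol0

# Files pending heal (reported empty here)
gluster volume heal gvol0 info

# Explicit split-brain check (also empty)
gluster volume heal gvol0 info split-brain
```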
<br>
The following upgrade was done:<br>
<br>
glusterfs-server:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)<br>
glusterfs-client:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)<br>
glusterfs-common:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)<br>
<br>
The logs only show that connection between the servers was lost, which is expected.<br>
<br>
I can't determine whether it was the package upgrade or the reboot that caused this issue, and I've tried to recreate it without success.<br>
<br>
Any idea what could have gone wrong, or whether I did something wrong during the setup? For reference, this is how I did the setup:<br>
<br>
---<br>
Add a separate disk with a single partition on both servers (/dev/sdb1)<br>
<br>
Add gfs hostnames for direct communication without DNS, on both servers:<br>
<br>
/etc/hosts<br>
<br>
192.168.4.45 gfs01<br>
192.168.4.46 gfs02<br>
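A quick way to verify that these entries resolve as intended on each server (standard glibc tooling, not part of the original setup):<br>

```shell
# Should print the addresses from /etc/hosts, not DNS
getent hosts gfs01 gfs02
```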
<br>
On gfs01, create a new LVM Volume Group:<br>
vgcreate gfs01-vg /dev/sdb1<br>
<br>
And on gfs02:<br>
vgcreate gfs02-vg /dev/sdb1<br>
<br>
Create a logical volume for the brick on each server:<br>
<br>
gfs01:<br>
lvcreate -l 100%VG -n brick1 gfs01-vg<br>
gfs02:<br>
lvcreate -l 100%VG -n brick2 gfs02-vg<br>
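The volume group and logical volume can be confirmed with standard LVM tooling before formatting (shown for gfs01; the equivalent applies on gfs02):<br>

```shell
# Show the volume group and its capacity
vgs gfs01-vg

# Show the logical volume backing the brick
lvs gfs01-vg
```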
<br>
Format the volumes with an ext4 filesystem:<br>
<br>
gfs01:<br>
mkfs.ext4 /dev/gfs01-vg/brick1<br>
gfs02:<br>
mkfs.ext4 /dev/gfs02-vg/brick2<br>
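ext4 works, but note that the upstream Gluster documentation generally recommends XFS with a 512-byte inode size for bricks; that alternative would have looked like this (a sketch, not what was used here):<br>

```shell
# On gfs01 (gfs02 analogous with brick2)
mkfs.xfs -i size=512 /dev/gfs01-vg/brick1
```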
<br>
Create a mountpoint for the bricks on the servers:<br>
<br>
gfs01:<br>
mkdir -p /glusterdata/brick1<br>
gfs02:<br>
mkdir -p /glusterdata/brick2<br>
<br>
Make a permanent mount on the servers, in /etc/fstab:<br>
<br>
gfs01:<br>
/dev/gfs01-vg/brick1 /glusterdata/brick1 ext4 defaults 0 0<br>
gfs02:<br>
/dev/gfs02-vg/brick2 /glusterdata/brick2 ext4 defaults 0 0<br>
<br>
Mount it:<br>
mount -a<br>
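The brick mounts can then be verified with, e.g.:<br>

```shell
# Confirm each brick filesystem is mounted (run on the respective server)
findmnt /glusterdata/brick1   # on gfs01
findmnt /glusterdata/brick2   # on gfs02
```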
<br>
Create a gluster volume mount point on the bricks on the servers:<br>
<br>
gfs01:<br>
mkdir -p /glusterdata/brick1/gvol0<br>
gfs02:<br>
mkdir -p /glusterdata/brick2/gvol0<br>
<br>
From each server, peer probe the other one:<br>
<br>
gluster peer probe gfs01<br>
peer probe: success<br>
<br>
gluster peer probe gfs02<br>
peer probe: success<br>
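To confirm the probe took effect, on either server:<br>

```shell
# Expect the other node listed with "State: Peer in Cluster (Connected)"
gluster peer status
```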
<br>
From any single server, create the gluster volume as a "replica" with two nodes, gfs01 and gfs02:<br>
<br>
gluster volume create gvol0 replica 2 gfs01:/glusterdata/brick1/gvol0 gfs02:/glusterdata/brick2/gvol0<br>
<br>
Start the volume:<br>
<br>
gluster volume start gvol0<br>
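At this point the volume can be sanity-checked with:<br>

```shell
# Expect "Status: Started" and both bricks listed
gluster volume info gvol0
gluster volume status gvol0
```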
<br>
On each server, mount the gluster filesystem on the /filestore mount point:<br>
<br>
gfs01:<br>
mount -t glusterfs gfs01:/gvol0 /filestore<br>
gfs02:<br>
mount -t glusterfs gfs02:/gvol0 /filestore<br>
<br>
Make the mount permanent on the servers:<br>
<br>
/etc/fstab<br>
<br>
gfs01:<br>
gfs01:/gvol0 /filestore glusterfs defaults,_netdev 0 0<br>
gfs02:<br>
gfs02:/gvol0 /filestore glusterfs defaults,_netdev 0 0<br>
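After a reboot, the fuse mount can be verified with, e.g.:<br>

```shell
# Should show a fuse.glusterfs filesystem mounted on /filestore
df -hT /filestore
```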
---<br>
<br>
Regards,<br>
<br>
Johan Karlsson<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div></div>