<div dir="ltr">Answers inline<br><div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 7, 2018 at 8:44 PM, Marcus Pedersén <span dir="ltr"><<a href="mailto:marcus.pedersen@slu.se" target="_blank">marcus.pedersen@slu.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Thank you for your help!<br>
> Just to make things clear to me (and get a better understanding of gluster):
> So, if I make the slave cluster just distributed and node 1 goes down,
> data (say file.txt) that belongs to node 1 will not be synced.
> When node 1 comes back up, does the master not realize that file.txt has not
> been synced, and make sure that it is synced when it has contact with node 1 again?
> So file.txt will not exist on node 1 at all?

Geo-replication syncs changes based on the changelog journal, which records all file operations.
It syncs every file in two steps:

1. File creation with the same attributes as on the master, via RPC (CREATE is recorded in the changelog).
2. Data sync via rsync (DATA is recorded in the changelog; any further appends only record DATA).

Changelog processing does not halt on encountering ENOENT (it treats it as a safe error), so the behavior is not
straightforward. When I said the file won't be synced, I meant this: the file was already created on node1, and
when you append data while node1 is down, the appended data does not sync, because the data sync gets ENOENT and
skips it. But if the CREATE of the file has not been synced to node1 yet, that is a persistent failure (ENOTCONN),
and geo-replication waits until node1 comes back.
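To make that concrete, here is a rough sketch of the two cases on a test setup
(the client mount point, file names, and which slave node a file hashes to are
hypothetical):

    # /mnt/master is a hypothetical client mount of the master volume.
    # Both slave nodes up: CREATE and DATA both sync normally.
    echo "version 1" > /mnt/master/file.txt

    # Power off the slave node that file.txt belongs to, then append.
    # Only DATA is recorded for the append; the rsync step gets ENOENT,
    # which is treated as a safe error, so the appended data is skipped.
    echo "version 2" >> /mnt/master/file.txt

    # A brand-new file created while that node is down fails at the CREATE
    # step with ENOTCONN, a persistent error, so geo-replication keeps
    # retrying and the file syncs once the node is back.
    echo "new" > /mnt/master/another-file.txt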
>
> I did a small test on my testing machines.
> Turned one of the geo machines off and created 10000 files containing one
> short string in the master nodes.
> Nothing became synced with the geo slaves.
> When I turned on the geo machine again, all 10000 files were synced to the
> geo slaves, of course divided between the two machines.
> Is this the right/expected behavior of geo-replication with a distributed cluster?

Yes, that is correct. As I said earlier, the CREATE itself would have failed with ENOTCONN, so geo-replication
waited until the slave came back. If you instead bring a slave node down and append data to files that belong
to that node, you will not see the appended data. So it is always recommended to use replica/ec/arbiter.
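For reference, a minimal sketch of creating the slave as a replica 3
arbiter 1 volume instead of plain distribute (the third host and the
brick paths here are hypothetical):

    # gluster-geo3 and the brick paths are hypothetical examples;
    # an arbiter setup needs a third node for the arbiter brick.
    gluster volume create interbullfs-geo replica 3 arbiter 1 \
        gluster-geo1:/bricks/geo/brick1 \
        gluster-geo2:/bricks/geo/brick2 \
        gluster-geo3:/bricks/geo/arbiter
    gluster volume start interbullfs-geo

With a replicated slave, a single slave node going down no longer makes the
files that hash to it unreachable for syncing.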
>
> Many thanks in advance!
>
> Regards
> Marcus
>
> On Wed, Feb 07, 2018 at 06:39:20PM +0530, Kotresh Hiremath Ravishankar wrote:
> > We are happy to help you out. Please find the answers inline.
> >
> > On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén <marcus.pedersen@slu.se> wrote:
> >
> > > Hi all,
> > >
> > > I am planning my new gluster system and tested things out in
> > > a bunch of virtual machines.
> > > I need a bit of help to understand how geo-replication behaves.
> > >
> > > I have a master gluster cluster replica 2
> > > (in production I will use an arbiter and replicated/distributed)
> > > and the geo cluster is distributed with 2 machines.
> > > (in production I will have the geo cluster distributed)
> > >
> > It's recommended that the slave also be distribute-replicate/arbiter/ec.
> > Choosing plain distribute will cause issues when one of the slave nodes is
> > down and a file that belongs to that node is being synced. It would not
> > sync later.
> >
> >
> > > Everything is up and running, and files created from a client are both
> > > replicated and distributed in the geo cluster.
> > >
> > > The thing I am wondering about is:
> > > When I run: gluster volume geo-replication status
> > > I see both slave nodes; one is active and the other is passive.
> > >
> > > MASTER NODE    MASTER VOL     MASTER BRICK    SLAVE USER    SLAVE                                          SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
> > > --------------------------------------------------------------------------------------------------------------------------------------------------------------------
> > > gluster1       interbullfs    /interbullfs    geouser       ssh://geouser@gluster-geo1::interbullfs-geo    gluster-geo2    Active     Changelog Crawl    2018-02-06 11:46:08
> > > gluster2       interbullfs    /interbullfs    geouser       ssh://geouser@gluster-geo1::interbullfs-geo    gluster-geo1    Passive    N/A                N/A
> > >
> > >
> > > If I shut down the active slave, the status changes to faulty
> > > and the other one continues to be passive.
> > >
> >
> > > MASTER NODE    MASTER VOL     MASTER BRICK    SLAVE USER    SLAVE                                          SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
> > > --------------------------------------------------------------------------------------------------------------------------------------------------------------------
> > > gluster1       interbullfs    /interbullfs    geouser       ssh://geouser@gluster-geo1::interbullfs-geo    N/A             Faulty     N/A                N/A
> > > gluster2       interbullfs    /interbullfs    geouser       ssh://geouser@gluster-geo1::interbullfs-geo    gluster-geo1    Passive    N/A                N/A
> > >
> > >
> > > In my understanding, I thought that if the active slave stopped
> > > working, the passive slave should become active and should
> > > continue to replicate from the master.
> > >
> > > Am I wrong? Is there just one active slave if it is set up as
> > > a distributed system?
> > >
> >
> > The Active/Passive notion is for the master nodes, not the slave nodes.
> > If the gluster1 master node is down, the gluster2 master node will become
> > Active.
> >
> >
> >
> > >
> > > What I use:
> > > CentOS 7, gluster 3.12
> > > I have followed the geo-replication instructions:
> > > http://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
> > >
> > > Many thanks in advance!
> > >
> > > Best regards
> > > Marcus
> > >
> >
> > --
> > Thanks and Regards,
> > Kotresh H R
>
> --
> ***************************************************
> * Marcus Pedersén                                 *
> * System administrator                            *
> ***************************************************
> * Interbull Centre                                *
> * ================                                *
> * Department of Animal Breeding & Genetics — SLU  *
> * Box 7023, SE-750 07                             *
> * Uppsala, Sweden                                 *
> ***************************************************
> * Visiting address:                               *
> * Room 55614, Ulls väg 26, Ultuna                 *
> * Uppsala                                         *
> * Sweden                                          *
> *                                                 *
> * Tel: +46-(0)18-67 1962                          *
> *                                                 *
> ***************************************************
> * ISO 9001 Bureau Veritas No SE004561-1           *
> ***************************************************

--
Thanks and Regards,
Kotresh H R