<div dir="ltr"><div><div><div><div><div><div>Hi Marcus,<br><br></div>There are no issues with geo-rep and disperse volumes. It works with disperse volume<br></div>being master or slave or both. You can run replicated distributed at master and diperse distributed<br></div>at slave or disperse distributed at both master and slave. There was an issue with lookup on / taking<br></div>longer time because of eager locks in disperse and that's been fixed. Which version are you running?<br><br></div>Thanks,<br></div>Kotresh HR<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Mar 2, 2018 at 3:05 PM, Marcus Pedersén <span dir="ltr"><<a href="mailto:marcus.pedersen@slu.se" target="_blank">marcus.pedersen@slu.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi again,<br>
I have been testing and reading up on other solutions
and just wanted to check if my ideas are ok.
I have been looking at dispersed volumes and wonder if there are any
problems running a replicated-distributed cluster on the master side and
a dispersed-distributed cluster on the slave side of a geo-replication.
Second thought: is running dispersed on both sides a problem
(master: dispersed-distributed, slave: dispersed-distributed)?
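For clarity, this is the kind of slave layout I mean, sketched with example
hostnames and brick paths (not my real setup):

    # slave: distributed-dispersed, 2 x (2+1)
    gluster volume create slavevol disperse 3 redundancy 1 \
        geo1:/data/brick1 geo2:/data/brick1 geo3:/data/brick1 \
        geo1:/data/brick2 geo2:/data/brick2 geo3:/data/brick2

    # link master and slave (assumes passwordless ssh and
    # "gluster system:: execute gsec_create" already done)
    gluster volume geo-replication mastervol geo1::slavevol create push-pem
    gluster volume geo-replication mastervol geo1::slavevol start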

Many thanks in advance!

Best regards
Marcus

On Thu, Feb 08, 2018 at 02:57:48PM +0530, Kotresh Hiremath Ravishankar wrote:
> Answers inline
>
> On Thu, Feb 8, 2018 at 1:26 PM, Marcus Pedersén <marcus.pedersen@slu.se>
> wrote:
>
> > Thank you, Kotresh
> >
> > I talked to your storage colleagues at the Open Source Summit in Prague
> > last year.
> > I described my layout idea to them and they said it was a good solution.
> > Sorry for mailing you in private, but I see this as your internal matter.
> >
> > The reason that I seem stressed is that I have already placed my order
> > for new file servers for this, so I need to change it as soon as possible.
> >
> > So, a last double check with you:
> > If I build the master cluster as I thought from the beginning,
> > distributed-replicated (replica 3 arbiter 1), in total 4 file servers
> > and one arbiter (the same arbiter host used for both "pairs"),
> > and build the slave cluster the same way, distributed-replicated
> > (replica 3 arbiter 1), in total 4 file servers and one arbiter (the same
> > arbiter host used for both "pairs"),
> > do I get a good technical solution?
> >
>
> Yes, that works fine.
>
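> For reference, that brick layout could be created roughly like this
> (hostnames and brick paths are just examples); the arbiter host carries
> one arbiter brick per replica set:
>
>     gluster volume create mastervol replica 3 arbiter 1 \
>         fs1:/data/brick fs2:/data/brick arb:/data/arbiter1 \
>         fs3:/data/brick fs4:/data/brick arb:/data/arbiter2
>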
> >
> > I liked your description of how the sync works; it made me understand
> > much better how the system works!
> >
> > Thank you very much for all your help!
> >
>
> No problem. We are happy to help you.
>
> >
> > Best regards
> > Marcus
> >
> >
> > On Wed, Feb 07, 2018 at 09:40:32PM +0530, Kotresh Hiremath Ravishankar
> > wrote:
> > > Answers inline
> > >
> > > On Wed, Feb 7, 2018 at 8:44 PM, Marcus Pedersén <marcus.pedersen@slu.se>
> > > wrote:
> > >
> > > > Thank you for your help!
> > > > Just to make things clear to me (and to get a better understanding of
> > > > gluster):
> > > > So, if I make the slave cluster just distributed and node 1 goes down,
> > > > data (say file.txt) that belongs to node 1 will not be synced.
> > > > When node 1 comes back up, does the master not realize that file.txt
> > > > has not been synced, and make sure that it is synced when it has
> > > > contact with node 1 again?
> > > > So file.txt will not exist on node 1 at all?
> > > >
> > > Geo-replication syncs changes based on the changelog journal, which
> > > records all the file operations.
> > > It syncs every file in two steps:
> > > 1. File creation with the same attributes as on the master, via RPC
> > >    (a CREATE record in the changelog)
> > > 2. Data sync via rsync (a DATA record in the changelog; any further
> > >    appends only record DATA)
> > >
> > > Changelog processing does not halt on encountering ENOENT (it is
> > > treated as a safe error), so it is not straightforward. When I said
> > > the file won't be synced, I meant: the file is created on node1, and
> > > when you append data, the data does not sync because the sync gets
> > > ENOENT while node1 is down. But if the CREATE of the file is not
> > > synced to node1, that is a persistent failure (ENOTCONN) and geo-rep
> > > waits till node1 comes back.
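> > >
> > > If you are curious, you can look at the journal on any master brick;
> > > a rough sketch (the brick path is just an example):
> > >
> > >     # changelogs are rolled over every 15 seconds by default
> > >     ls /bricks/brick1/.glusterfs/changelogs/
> > >     # each CHANGELOG.<timestamp> holds entry (E, e.g. CREATE),
> > >     # metadata (M) and data (D) records keyed by gfid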
> > > >
> > > > I did a small test on my testing machines.
> > > > I turned one of the geo machines off and created 10000 files, each
> > > > containing one short string, on the master nodes.
> > > > Nothing was synced to the geo slaves.
> > > > When I turned the geo machine on again, all 10000 files were synced
> > > > to the geo slaves, of course divided between the two machines.
> > > > Is this the right/expected behaviour of geo-replication with a
> > > > distributed cluster?
> > > >
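> > > > For reference, this is roughly what I ran (the mount point is just
> > > > an example):
> > > >
> > > >     # on a client with the master volume mounted at /mnt/master
> > > >     for i in $(seq 1 10000); do
> > > >         echo "test $i" > /mnt/master/file$i
> > > >     done
> > > >     # then watch the sync catch up after the slave node returns
> > > >     gluster volume geo-replication interbullfs \
> > > >         geouser@gluster-geo1::interbullfs-geo status
> > > >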
> > >
> > > Yes, it's correct. As I said earlier, the CREATE itself would have
> > > failed with ENOTCONN, and geo-rep waited till the slave came back.
> > > Now bring a slave node down and append data to files which fall on
> > > the node which is down: you won't see the appended data.
> > > So it's always recommended to use replica/ec/arbiter.
> > >
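> > > A quick way to see that case (commands run on the slave node; the
> > > mount point is just an example):
> > >
> > >     systemctl stop glusterd   # stop the management daemon
> > >     pkill glusterfsd          # stop the brick processes too
> > >     # now append on a master client mount:
> > >     echo "more data" >> /mnt/master/file1
> > >     # appended bytes won't reach the slave for files that hash
> > >     # to the downed brick, even after it comes back
> > >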
> > > >
> > > > Many thanks in advance!
> > > >
> > > > Regards
> > > > Marcus
> > > >
> > > >
> > > > On Wed, Feb 07, 2018 at 06:39:20PM +0530, Kotresh Hiremath Ravishankar
> > > > wrote:
> > > > > We are happy to help you out. Please find the answers inline.
> > > > >
> > > > > On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén <marcus.pedersen@slu.se>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I am planning my new gluster system and tested things out in
> > > > > > a bunch of virtual machines.
> > > > > > I need a bit of help to understand how geo-replication behaves.
> > > > > >
> > > > > > I have a master gluster cluster, replica 2
> > > > > > (in production I will use an arbiter and replicated/distributed),
> > > > > > and the geo cluster is distributed with 2 machines
> > > > > > (in production I will have the geo cluster distributed).
> > > > > >
> > > > >
> > > > > It's recommended that the slave also be distributed
> > > > > replicate/arbiter/ec. Choosing plain distribute will cause issues
> > > > > when one of the slave nodes is down and a file that belongs to that
> > > > > node is being synced: it would not sync later.
> > > > >
> > > > >
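> > > > > For example, a replicated slave with your two data nodes plus an
> > > > > arbiter (gluster-geo3 here is a hypothetical third host) would be:
> > > > >
> > > > >     gluster volume create interbullfs-geo replica 3 arbiter 1 \
> > > > >         gluster-geo1:/data/brick gluster-geo2:/data/brick \
> > > > >         gluster-geo3:/data/arbiter
> > > > >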
> > > > > > Everything is up and running, and files created from a client
> > > > > > are both replicated and distributed in the geo cluster.
> > > > > >
> > > > > > The thing I am wondering about is:
> > > > > > when I run "gluster volume geo-replication status"
> > > > > > I see both slave nodes: one is active and the other is passive.
> > > > > >
> > > > > > MASTER NODE  MASTER VOL   MASTER BRICK  SLAVE USER  SLAVE                                         SLAVE NODE    STATUS   CRAWL STATUS     LAST_SYNCED
> > > > > > ---------------------------------------------------------------------------------------------------------------------------------------------------
> > > > > > gluster1     interbullfs  /interbullfs  geouser     ssh://geouser@gluster-geo1::interbullfs-geo  gluster-geo2  Active   Changelog Crawl  2018-02-06 11:46:08
> > > > > > gluster2     interbullfs  /interbullfs  geouser     ssh://geouser@gluster-geo1::interbullfs-geo  gluster-geo1  Passive  N/A              N/A
> > > > > >
> > > > > >
> > > > > > If I shut down the active slave, the status changes to faulty
> > > > > > and the other one continues to be passive.
> > > > > >
> > > > > > MASTER NODE  MASTER VOL   MASTER BRICK  SLAVE USER  SLAVE                                         SLAVE NODE    STATUS   CRAWL STATUS  LAST_SYNCED
> > > > > > ------------------------------------------------------------------------------------------------------------------------------------------------
> > > > > > gluster1     interbullfs  /interbullfs  geouser     ssh://geouser@gluster-geo1::interbullfs-geo  N/A           Faulty   N/A           N/A
> > > > > > gluster2     interbullfs  /interbullfs  geouser     ssh://geouser@gluster-geo1::interbullfs-geo  gluster-geo1  Passive  N/A           N/A
> > > > > >
> > > > > >
> > > > > > In my understanding, I thought that if the active slave stopped
> > > > > > working the passive slave should become active and should
> > > > > > continue to replicate from the master.
> > > > > >
> > > > > > Am I wrong? Is there just one active slave if it is set up as
> > > > > > a distributed system?
> > > > > >
> > > > >
> > > > > The Active/Passive notion is for the master nodes, not for the
> > > > > slave nodes. If the gluster1 master node goes down, the gluster2
> > > > > master node will become Active.
> > > > >
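> > > > > You can watch that switch from the master side, e.g.:
> > > > >
> > > > >     # per-brick worker status, one line per master brick
> > > > >     gluster volume geo-replication interbullfs \
> > > > >         geouser@gluster-geo1::interbullfs-geo status detail
> > > > >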
> > > > > >
> > > > > > What I use:
> > > > > > CentOS 7, gluster 3.12
> > > > > > I have followed the geo-replication instructions:
> > > > > > http://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
> > > > > >
> > > > > > Many thanks in advance!
> > > > > >
> > > > > > Best regards
> > > > > > Marcus
> > > > > >
--
**************************************************
* Marcus Pedersén                                *
* System administrator                           *
**************************************************
* Interbull Centre                               *
* ================                               *
* Department of Animal Breeding & Genetics — SLU *
* Box 7023, SE-750 07                            *
* Uppsala, Sweden                                *
**************************************************
* Visiting address:                              *
* Room 55614, Ulls väg 26, Ultuna                *
* Uppsala                                        *
* Sweden                                         *
*                                                *
* Tel: +46-(0)18-67 1962                         *
*                                                *
**************************************************
* ISO 9001 Bureau Veritas No SE004561-1          *
**************************************************

--
Thanks and Regards,
Kotresh H R