<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Dear Gilberto,</p>
    <p><br>
    </p>
    <p>If I am right, you ran into server quorum when you started a
      2-node replica and shut down one host.</p>
    <p>From my perspective, that is fine.</p>
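    <p>If you want to check that theory, the quorum settings are
      visible from the gluster CLI. A minimal sketch (volume name taken
      from this thread; whether to relax quorum on a 2-node replica is
      a trade-off, not a recommendation):</p>
    <pre>
# Show the current server-quorum setting for the volume
gluster volume get DATA cluster.server-quorum-type

# On a 2-node replica, server quorum is sometimes disabled so that
# the surviving node keeps its bricks running when the peer is down:
gluster volume set DATA cluster.server-quorum-type none
    </pre>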
    <p><br>
    </p>
    <p>Please correct me if I am wrong here.</p>
    <p><br>
    </p>
    <p>Regards,</p>
    <p>Felix<br>
    </p>
    <div class="moz-cite-prefix">On 27/10/2020 01:46, Gilberto Nunes
      wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAOKSTBveii-CYR6RMaf4+YAfwAsioTfto2OU0AWeEfk9TrDrpg@mail.gmail.com">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="ltr">Well, I did not reboot the host; I shut it down,
        then gave up after 15 minutes.
        <div>I don't know why that happened.</div>
        <div>I will try it later.</div>
        <div><br>
        </div>
        <div>
          <div>
            <div dir="ltr" class="gmail_signature"
              data-smartmail="gmail_signature">
              <div dir="ltr">---<br>
                Gilberto Nunes Ferreira</div>
            </div>
          </div>
          <br>
        </div>
      </div>
      <br>
      <div class="gmail_quote">
        <div dir="ltr" class="gmail_attr">On Mon, Oct 26, 2020 at
          21:31, Strahil Nikolov &lt;<a
            href="mailto:hunter86_bg@yahoo.com" moz-do-not-send="true">hunter86_bg@yahoo.com</a>&gt;
          wrote:<br>
        </div>
        <blockquote class="gmail_quote" style="margin:0px 0px 0px
          0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Usually
          there is only one "master", but when you power off one of the
          2 nodes, geo-replication should handle that and the second
          node should take over the job.<br>
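          <br>
          One way to see what each node thinks is happening (volume
          names taken from this thread; a generic status check, not
          specific advice):<br>
          <pre>
# Show per-node session state (Active/Passive/Faulty) and crawl status
gluster volume geo-replication DATA gluster03::DATA-SLAVE status detail
          </pre>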
          <br>
          How long did you wait after gluster1 was rebooted?<br>
          <br>
          <br>
          Best Regards,<br>
          Strahil Nikolov<br>
          <br>
          On Monday, October 26, 2020 at 22:46:21 GMT+2, Gilberto Nunes
          &lt;<a href="mailto:gilberto.nunes32@gmail.com"
            target="_blank" moz-do-not-send="true">gilberto.nunes32@gmail.com</a>&gt;
          wrote: <br>
          <br>
          I was able to solve the issue by restarting all servers.<br>
          <br>
          Now I have another issue!<br>
          <br>
          I just powered off the gluster01 server, and then
          geo-replication entered a Faulty status.<br>
          I tried to stop and start gluster geo-replication like
          this:<br>
          <br>
          gluster volume geo-replication DATA root@gluster03::DATA-SLAVE
          resume<br>
          <br>
          Peer gluster01.home.local, which is a part of DATA volume, is
          down. Please bring up the peer and retry.<br>
          geo-replication command failed<br>
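          <br>
          For what it is worth, the resume subcommand also accepts a
          force flag; whether it helps while a peer is down depends on
          your gluster version, so treat this as a sketch to try, not a
          fix:<br>
          <pre>
# Attempt to force the resume despite the down peer (verify on your version)
gluster volume geo-replication DATA root@gluster03::DATA-SLAVE resume force
          </pre>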
          How can I have geo-replication with 2 masters and 1 slave?<br>
          <br>
          Thanks<br>
          <br>
          <br>
          ---<br>
          Gilberto Nunes Ferreira<br>
          <br>
          On Mon, Oct 26, 2020 at 17:23, Gilberto Nunes &lt;<a
            href="mailto:gilberto.nunes32@gmail.com" target="_blank"
            moz-do-not-send="true">gilberto.nunes32@gmail.com</a>&gt;
          wrote:<br>
          &gt; Hi there...<br>
          &gt; <br>
          &gt; I created a 2-node gluster volume and another gluster
          server acting as a backup server, using geo-replication.<br>
          &gt; So on gluster01 I issued the commands:<br>
          &gt; <br>
          &gt; gluster peer probe gluster02;gluster peer probe gluster03<br>
          &gt; gluster vol create DATA replica 2
          gluster01:/DATA/master01-data gluster02:/DATA/master01-data/<br>
          &gt; <br>
          &gt; Then on the gluster03 server:<br>
          &gt; <br>
          &gt; gluster vol create DATA-SLAVE gluster03:/DATA/slave-data/<br>
          &gt; <br>
          &gt; I set up passwordless SSH sessions between these 3
          servers.<br>
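          A passwordless-SSH setup between the three hosts usually
          looks like this (hostnames from this thread; a sketch, not
          the exact commands used here):<br>
          <pre>
# On gluster01: generate a key pair if one does not exist yet
ssh-keygen -t rsa -b 4096

# Copy the public key to the other two hosts
ssh-copy-id root@gluster02
ssh-copy-id root@gluster03
          </pre>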
          &gt; <br>
          &gt; Then I used this script<br>
          &gt; <br>
          &gt; <a
            href="https://github.com/gilbertoferreira/georepsetup"
            rel="noreferrer" target="_blank" moz-do-not-send="true">https://github.com/gilbertoferreira/georepsetup</a><br>
          &gt; <br>
          &gt; like this<br>
          &gt; <br>
          &gt; georepsetup<br>
          /usr/local/lib/python2.7/dist-packages/paramiko-2.7.2-py2.7.egg/paramiko/transport.py:33:
          CryptographyDeprecationWarning: Python 2 is no longer
          supported by the Python core team. Support for it is now
          deprecated in cryptography, and will be removed in a future
          release.<br>
          from cryptography.hazmat.backends import default_backend<br>
          usage: georepsetup [-h] [--force] [--no-color] MASTERVOL
          SLAVE SLAVEVOL<br>
          georepsetup: error: too few arguments<br>
          gluster01:~# georepsetup DATA gluster03 DATA-SLAVE<br>
          /usr/local/lib/python2.7/dist-packages/paramiko-2.7.2-py2.7.egg/paramiko/transport.py:33:
          CryptographyDeprecationWarning: Python 2 is no longer
          supported by the Python core team. Support for it is now
          deprecated in cryptography, and will be removed in a future
          release.<br>
          from cryptography.hazmat.backends import default_backend<br>
          Geo-replication session will be established between DATA and
          gluster03::DATA-SLAVE<br>
          Root password of gluster03 is required to complete the setup.
          NOTE: Password will not be stored.<br>
          root@gluster03's password:<br>
          [    OK] gluster03 is Reachable(Port 22)<br>
          [    OK] SSH Connection established root@gluster03<br>
          [    OK] Master Volume and Slave Volume are compatible
          (Version: 8.2)<br>
          [    OK] Common secret pub file present at
          /var/lib/glusterd/geo-replication/common_secret.pem.pub<br>
          [    OK] common_secret.pem.pub file copied to gluster03<br>
          [    OK] Master SSH Keys copied to all Up Slave nodes<br>
          [    OK] Updated Master SSH Keys to all Up Slave nodes
          authorized_keys file<br>
          [    OK] Geo-replication Session Established<br>
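          <br>
          For reference, the script automates roughly what the built-in
          session setup does; the equivalent native commands would be
          something like this (same volume names assumed; check the
          geo-replication docs for your gluster version):<br>
          <pre>
# Create and start the geo-replication session with the native CLI
gluster volume geo-replication DATA gluster03::DATA-SLAVE create push-pem
gluster volume geo-replication DATA gluster03::DATA-SLAVE start
          </pre>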
          &gt; Then I rebooted the 3 servers...<br>
          &gt; After a while everything worked OK, but after a few
          minutes I got a Faulty status on gluster01...<br>
          &gt; <br>
          &gt; Here is the log<br>
          &gt; <br>
          &gt; <br>
          &gt; [2020-10-26 20:16:41.362584] I
          [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus:
          Worker Status Change [{status=Initializing...}]<br>
          [2020-10-26 20:16:41.362937] I [monitor(monitor):160:monitor]
          Monitor: starting gsyncd worker [{brick=/DATA/master01-data},
          {slave_node=gluster03}]<br>
          [2020-10-26 20:16:41.508884] I [resource(worker
          /DATA/master01-data):1387:connect_remote] SSH: Initializing
          SSH connection between master and slave...<br>
          [2020-10-26 20:16:42.996678] I [resource(worker
          /DATA/master01-data):1436:connect_remote] SSH: SSH connection
          between master and slave established. [{duration=1.4873}]<br>
          [2020-10-26 20:16:42.997121] I [resource(worker
          /DATA/master01-data):1116:connect] GLUSTER: Mounting gluster
          volume locally...<br>
          [2020-10-26 20:16:44.170661] E [syncdutils(worker
          /DATA/master01-data):110:gf_mount_ready] &lt;top&gt;: failed
          to get the xattr value<br>
          [2020-10-26 20:16:44.171281] I [resource(worker
          /DATA/master01-data):1139:connect] GLUSTER: Mounted gluster
          volume [{duration=1.1739}]<br>
          [2020-10-26 20:16:44.171772] I [subcmds(worker
          /DATA/master01-data):84:subcmd_worker] &lt;top&gt;: Worker
          spawn successful. Acknowledging back to monitor<br>
          [2020-10-26 20:16:46.200603] I [master(worker
          /DATA/master01-data):1645:register] _GMaster: Working dir
          [{path=/var/lib/misc/gluster/gsyncd/DATA_gluster03_DATA-SLAVE/DATA-master01-data}]<br>
          [2020-10-26 20:16:46.201798] I [resource(worker
          /DATA/master01-data):1292:service_loop] GLUSTER: Register
          time [{time=1603743406}]<br>
          [2020-10-26 20:16:46.226415] I [gsyncdstatus(worker
          /DATA/master01-data):281:set_active] GeorepStatus: Worker
          Status Change [{status=Active}]<br>
          [2020-10-26 20:16:46.395112] I [gsyncdstatus(worker
          /DATA/master01-data):253:set_worker_crawl_status]
          GeorepStatus: Crawl Status Change [{status=History
          Crawl}]<br>
          [2020-10-26 20:16:46.396491] I [master(worker
          /DATA/master01-data):1559:crawl] _GMaster: starting history
          crawl [{turns=1}, {stime=(1603742506, 0)},
          {etime=1603743406}, {entry_stime=(1603743226, 0)}]<br>
          [2020-10-26 20:16:46.399292] E [resource(worker
          /DATA/master01-data):1312:service_loop] GLUSTER: Changelog
          History Crawl failed [{error=[Errno 0] Sucesso}]<br>
          [2020-10-26 20:16:47.177205] I [monitor(monitor):228:monitor]
          Monitor: worker died in startup phase
          [{brick=/DATA/master01-data}]<br>
          [2020-10-26 20:16:47.184525] I
          [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus:
          Worker Status Change [{status=Faulty}]<br>
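          <br>
          When a worker keeps dying in the startup phase like this, one
          commonly tried (but not guaranteed) step is to bounce the
          session; a sketch using this thread's names:<br>
          <pre>
# Stop and restart the geo-replication session forcefully
gluster volume geo-replication DATA gluster03::DATA-SLAVE stop force
gluster volume geo-replication DATA gluster03::DATA-SLAVE start force
          </pre>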
          &gt; <br>
          &gt; Any advice will be welcome.<br>
          &gt; <br>
          &gt; Thanks<br>
          &gt; <br>
          &gt; ---<br>
          &gt; Gilberto Nunes Ferreira<br>
          &gt; <br>
          ________<br>
          <br>
          <br>
          <br>
          Community Meeting Calendar:<br>
          <br>
          Schedule -<br>
          Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br>
          Bridge: <a href="https://bluejeans.com/441850968"
            rel="noreferrer" target="_blank" moz-do-not-send="true">https://bluejeans.com/441850968</a><br>
          <br>
          Gluster-users mailing list<br>
          <a href="mailto:Gluster-users@gluster.org" target="_blank"
            moz-do-not-send="true">Gluster-users@gluster.org</a><br>
          <a
            href="https://lists.gluster.org/mailman/listinfo/gluster-users"
            rel="noreferrer" target="_blank" moz-do-not-send="true">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
        </blockquote>
      </div>
      <br>
    </blockquote>
  </body>
</html>