<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>I have completed the patches and pushed them for review. Please feel
      free to raise your review concerns/suggestions.</p>
    <p><br>
    </p>
    <p><a href="https://www.google.com/url?q=https%3A%2F%2Freview.gluster.org%2F%23%2Fc%2Fglusterfs%2F%2B%2F21868&amp;sa=D&amp;ust=1547360636897000&amp;usg=AFQjCNFBLolUmZYRP05J_7GbdYXGSs_Wcg" target="_blank" style="text-decoration: underline; color: rgb(66, 133, 244); font-family: Roboto, Arial, sans-serif; font-size: 14px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: 0.2px; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(255, 255, 255);">https://review.gluster.org/#/c/glusterfs/+/21868</a><span style="color: rgb(60, 64, 67); font-family: Roboto, Arial, sans-serif; font-size: 14px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: 0.2px; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(255, 255, 255); text-decoration-style: initial; text-decoration-color: initial; display: inline !important; float: none;"> </span></p>
    <p><a href="https://www.google.com/url?q=https%3A%2F%2Freview.gluster.org%2F%23%2Fc%2Fglusterfs%2F%2B%2F21907&amp;sa=D&amp;ust=1547360636897000&amp;usg=AFQjCNFT0DYFibCGY_n8a3JdF53-5L1Jrw" target="_blank" style="text-decoration: underline; color: rgb(66, 133, 244); font-family: Roboto, Arial, sans-serif; font-size: 14px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: 0.2px; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(255, 255, 255);">https://review.gluster.org/#/c/glusterfs/+/21907</a><span style="color: rgb(60, 64, 67); font-family: Roboto, Arial, sans-serif; font-size: 14px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: 0.2px; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(255, 255, 255); text-decoration-style: initial; text-decoration-color: initial; display: inline !important; float: none;"> </span></p>
    <p><a href="https://www.google.com/url?q=https%3A%2F%2Freview.gluster.org%2F%23%2Fc%2Fglusterfs%2F%2B%2F21960&amp;sa=D&amp;ust=1547360636897000&amp;usg=AFQjCNEZUnX5MjDJNvowrRNQzIjnGX-Skg" target="_blank" style="text-decoration: underline; color: rgb(66, 133, 244); font-family: Roboto, Arial, sans-serif; font-size: 14px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: 0.2px; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(255, 255, 255);">https://review.gluster.org/#/c/glusterfs/+/21960</a></p>
    <p><a href="https://www.google.com/url?q=https%3A%2F%2Freview.gluster.org%2F%23%2Fc%2Fglusterfs%2F%2B%2F21960&amp;sa=D&amp;ust=1547360636897000&amp;usg=AFQjCNEZUnX5MjDJNvowrRNQzIjnGX-Skg" target="_blank" style="text-decoration: underline; color: rgb(66, 133, 244); font-family: Roboto, Arial, sans-serif; font-size: 14px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: 0.2px; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; background-color: rgb(255, 255, 255);">https://review.gluster.org/#/c/glusterfs/+/21989/</a></p>
    <p><br>
    </p>
    <p>Regards</p>
    <p>Rafi KC<br>
    </p>
    <p><br>
    </p>
    <div class="moz-cite-prefix">On 12/24/18 3:58 PM, RAFI KC wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:ce616aeb-7a87-4ce7-1822-ee1c006fd3ed@redhat.com">
      <br>
      On 12/21/18 6:56 PM, Sankarshan Mukhopadhyay wrote:
      <br>
      <blockquote type="cite">On Fri, Dec 21, 2018 at 6:30 PM RAFI KC
        <a class="moz-txt-link-rfc2396E" href="mailto:rkavunga@redhat.com">&lt;rkavunga@redhat.com&gt;</a> wrote:
        <br>
        <blockquote type="cite">Hi All,
          <br>
          <br>
          What is the problem?
          <br>
          As of now the self-heal client runs as one daemon per node; this
          means that even if there are multiple volumes, there will only be
          one self-heal daemon.
          <br>
          So for each configuration change in the cluster to take effect,
          the self-heal daemon has to be reconfigured, but it doesn't have
          the ability to reconfigure dynamically.
          <br>
          This means that when you have a lot of volumes in the cluster,
          every management operation that involves configuration changes,
          such as volume start/stop or add/remove-brick, will result in a
          self-heal daemon restart.
          <br>
          If such operations are executed often, this not only slows down
          self-heal for a volume, but also grows the self-heal logs
          substantially.
          <br>
        </blockquote>
        What is the value of the number of volumes when you write "lot of
        volumes"? 1000 volumes, more, etc.?
        <br>
      </blockquote>
      <br>
      Yes, more than 1000 volumes. It also depends on how often you
      execute glusterd management operations (mentioned above). Each
      time the self-heal daemon is restarted, it prints the entire graph,
      and these graph traces contribute the majority of the log's size.
      <br>
      <br>
      <br>
      <blockquote type="cite">
        <br>
        <blockquote type="cite">
          <br>
          How to fix it?
          <br>
          <br>
          We are planning to attach/detach graphs dynamically, following a
          procedure similar to brick multiplexing. The detailed steps are
          as below:
          <br>
          <br>
          <br>
          <br>
          <br>
          1) The first step is to make shd a per-volume daemon, i.e. to
          generate/reconfigure volfiles on a per-volume basis (a sketch of
          the idea follows this list).
          <br>
          <br>
              1.1) This will help to attach the volfiles easily to the
          existing shd daemon.
          <br>
          <br>
              1.2) This will help to send notifications to the shd daemon,
          as each volinfo keeps the daemon object.
          <br>
          <br>
              1.3) Reconfiguring a particular subvolume is easier, as we
          can check the topology better.
          <br>
          <br>
              1.4) With this change the volfiles will be moved to the
          workdir/vols/ directory.
          <br>
          <br>
          2) Write new rpc requests, like an attach/detach_client_graph
          function, to support client attach/detach (a sketch follows this
          list).
          <br>
          <br>
              2.1) Functions like graph reconfigure and mgmt_getspec_cbk
          also have to be modified.
          <br>
          <br>
          3) Safely detach a subvolume when there are pending frames to
          unwind (a sketch follows this list).
          <br>
          <br>
              3.1) We can mark the client as disconnected and make all the
          pending frames unwind with ENOTCONN.
          <br>
          <br>
              3.2) Or we can wait for all the I/O to unwind before the new,
          updated subvol attaches.
          <br>
          <br>
          4) Handle scenarios like glusterd restart, node reboot, etc.
          <br>
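          <p>Below is a rough sketch of the per-volume volfile idea in step
          1; the path layout and helper name are assumptions for
          illustration, not the actual glusterd code.</p>
          <pre>
/* Hypothetical: derive a per-volume shd volfile path once volfiles move
 * under each volume's directory in the glusterd workdir. */
#include &lt;stdio.h&gt;

#define WORKDIR "/var/lib/glusterd"

/* Build "&lt;workdir&gt;/vols/&lt;volname&gt;/&lt;volname&gt;-shd.vol" into buf. */
static int
build_shd_volfile_path(const char *volname, char *buf, size_t len)
{
    int ret = snprintf(buf, len, "%s/vols/%s/%s-shd.vol",
                       WORKDIR, volname, volname);
    return (ret &gt; 0 &amp;&amp; (size_t)ret &lt; len) ? 0 : -1;
}

int
main(void)
{
    char path[4096];

    if (build_shd_volfile_path("testvol", path, sizeof(path)) == 0)
        printf("shd volfile for testvol: %s\n", path);
    return 0;
}
</pre>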
          <br>
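          <p>A rough sketch of the attach flow in step 2; the struct and the
          rpc helper are hypothetical stand-ins rather than the real
          glusterfs RPC plumbing (brick multiplexing has a similar attach
          request).</p>
          <pre>
/* Hypothetical: ask an already running shd process to pick up one more
 * volume's self-heal graph. */
#include &lt;stdio.h&gt;

typedef struct shd_proc {
    int rpc_fd;       /* connection to the running shd process */
    int graph_count;  /* graphs currently multiplexed into it */
} shd_proc_t;

/* Stand-in for the attach RPC carrying the volfile id of the new graph. */
static int
send_attach_request(shd_proc_t *proc, const char *volfile_id)
{
    printf("attach %s via fd %d\n", volfile_id, proc-&gt;rpc_fd);
    return 0; /* pretend the shd acknowledged the attach */
}

/* Attach one volume's self-heal graph to the running shd process. */
static int
shd_attach_volume(shd_proc_t *proc, const char *volname)
{
    char volfile_id[256];

    snprintf(volfile_id, sizeof(volfile_id), "shd/%s", volname);
    if (send_attach_request(proc, volfile_id) != 0)
        return -1;

    proc-&gt;graph_count++;
    return 0;
}

int
main(void)
{
    shd_proc_t proc = { .rpc_fd = 7, .graph_count = 0 };

    shd_attach_volume(&amp;proc, "testvol");
    printf("graphs multiplexed: %d\n", proc.graph_count);
    return 0;
}
</pre>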
          <br>
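          <p>A rough sketch of option 3.1 in step 3, where pending frames
          are failed with ENOTCONN before the old graph is dropped; the
          types and helpers are illustrative only.</p>
          <pre>
/* Hypothetical: mark the graph's client as disconnected, fail whatever
 * is still pending with ENOTCONN, and only then detach the graph. */
#include &lt;errno.h&gt;
#include &lt;stdbool.h&gt;
#include &lt;stdio.h&gt;

typedef struct pending_frame {
    int                   frame_id;
    struct pending_frame *next;
} pending_frame_t;

typedef struct shd_graph {
    const char      *volname;
    bool             disconnected; /* no new fops once this is set */
    pending_frame_t *pending;      /* frames still waiting to unwind */
} shd_graph_t;

/* Stand-in for unwinding one frame back to the caller with an error. */
static void
unwind_frame(pending_frame_t *frame, int op_errno)
{
    printf("frame %d unwound with errno %d\n", frame-&gt;frame_id, op_errno);
}

/* Fail all pending frames with ENOTCONN so the old graph can be detached
 * without waiting for in-flight I/O. */
static void
shd_detach_graph(shd_graph_t *graph)
{
    graph-&gt;disconnected = true; /* reject anything new first */

    pending_frame_t *frame = graph-&gt;pending;
    while (frame) {
        pending_frame_t *next = frame-&gt;next;
        unwind_frame(frame, ENOTCONN);
        frame = next;
    }
    graph-&gt;pending = NULL;

    printf("graph for %s detached\n", graph-&gt;volname);
}

int
main(void)
{
    pending_frame_t f2 = { .frame_id = 2, .next = NULL };
    pending_frame_t f1 = { .frame_id = 1, .next = &amp;f2 };
    shd_graph_t graph = { .volname = "testvol", .disconnected = false,
                          .pending = &amp;f1 };

    shd_detach_graph(&amp;graph);
    return 0;
}
</pre>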
          <br>
          At the moment we are not planning to limit the number of heal
          subvolumes per process, because with the current approach heal
          for every volume was already being done from a single process,
          and we have not heard any major complaints about this.
          <br>
        </blockquote>
        Is the plan to never limit, or to have a throttle set to a
        default high(er) value? How would system resources be impacted if
        the proposed design is implemented?
        <br>
      </blockquote>
      <br>
      The plan is to implement it in a way that can support more than
      one multiplexed self-heal daemon. The throttling function as of
      now returns the same process to multiplex into, but it can easily
      be modified to create a new process (see the sketch below).
      <br>
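      <p>A rough sketch of how such a throttling function could look; the
      names and the per-process limit handling are assumptions for
      illustration, not the actual implementation.</p>
      <pre>
/* Hypothetical: pick the shd process a new volume's graph should be
 * multiplexed into; reuse an existing one if it still has room, else
 * spawn another. */
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

typedef struct shd_proc {
    int pid;
    int graph_count;
    struct shd_proc *next;
} shd_proc_t;

static shd_proc_t *shd_procs;        /* running shd processes */
static int max_graphs_per_proc = 0;  /* 0 means "no limit" (current behaviour) */

/* Stand-in for forking/execing a brand new shd process. */
static shd_proc_t *
spawn_shd_proc(void)
{
    shd_proc_t *proc = calloc(1, sizeof(*proc));
    if (!proc)
        return NULL;
    proc-&gt;pid = 4242; /* placeholder pid */
    proc-&gt;next = shd_procs;
    shd_procs = proc;
    return proc;
}

/* The throttling decision: today every graph lands in the same process. */
static shd_proc_t *
shd_select_proc(void)
{
    for (shd_proc_t *proc = shd_procs; proc; proc = proc-&gt;next) {
        if (max_graphs_per_proc == 0 ||
            proc-&gt;graph_count &lt; max_graphs_per_proc)
            return proc; /* reuse an existing process */
    }
    return spawn_shd_proc(); /* everyone is full, or none exist yet */
}

int
main(void)
{
    shd_proc_t *proc = shd_select_proc();
    if (!proc)
        return 1;
    proc-&gt;graph_count++;
    printf("multiplexing into pid %d (graphs: %d)\n", proc-&gt;pid,
           proc-&gt;graph_count);
    return 0;
}
</pre>
      <p>With max_graphs_per_proc left at 0 this behaves like the current
      single shd process; raising it would cap how many graphs share one
      process.</p>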
      <br>
      This multiplexing logic won't utilize any more resources than the
      current approach does.
      <br>
      <br>
      <br>
      Rafi KC
      <br>
      <br>
      <br>
      <blockquote type="cite">_______________________________________________
        <br>
        Gluster-devel mailing list
        <br>
        <a class="moz-txt-link-abbreviated" href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a>
        <br>
        <a class="moz-txt-link-freetext" href="https://lists.gluster.org/mailman/listinfo/gluster-devel">https://lists.gluster.org/mailman/listinfo/gluster-devel</a>
        <br>
      </blockquote>
    </blockquote>
  </body>
</html>