    <div class="moz-cite-prefix">On 05/01/2017 11:47 AM, Pranith Kumar
      Karampuri wrote:<br>
    </div>
> On Tue, May 2, 2017 at 12:14 AM, Shyam <srangana@redhat.com> wrote:
            <blockquote class="gmail_quote" style="margin:0 0 0
              .8ex;border-left:1px #ccc solid;padding-left:1ex"><span
                class="">On 05/01/2017 02:42 PM, Pranith Kumar Karampuri
                wrote:<br>
              </span>
              <blockquote class="gmail_quote" style="margin:0 0 0
                .8ex;border-left:1px #ccc solid;padding-left:1ex"><span
                  class="">
                  <br>
                  <br>
                  On Tue, May 2, 2017 at 12:07 AM, Shyam &lt;<a
                    moz-do-not-send="true"
                    href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a><br>
                </span><span class="">
                  &lt;mailto:<a moz-do-not-send="true"
                    href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>&gt;&gt;
                  wrote:<br>
                  <br>
                      On 05/01/2017 02:23 PM, Pranith Kumar Karampuri
                  wrote:<br>
                  <br>
                  <br>
                  <br>
                          On Mon, May 1, 2017 at 11:43 PM, Shyam &lt;<a
                    moz-do-not-send="true"
                    href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a><br>
                          &lt;mailto:<a moz-do-not-send="true"
                    href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>&gt;<br>
                </span>
                <div>
                  <div class="h5">
                            &lt;mailto:<a moz-do-not-send="true"
                      href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>
                    &lt;mailto:<a moz-do-not-send="true"
                      href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>&gt;&gt;&gt;
                    wrote:<br>
                    <br>
                                On 05/01/2017 02:00 PM, Pranith Kumar
                    Karampuri wrote:<br>
                    <br>
>>>>>>>> Splitting the bricks need not be a post factum decision; we
>>>>>>>> can start with larger brick counts on a given node/disk count,
>>>>>>>> and hence spread these bricks to newer nodes/bricks as they
>>>>>>>> are added.
>>>>>>>
>>>>>>> Let's say we have one disk and we format it with, say, XFS, and
>>>>>>> that becomes a brick at the moment. Just curious: what will the
>>>>>>> relationship between brick and disk be in this case (if we
>>>>>>> leave out LVM for this example)?
>>>>>>
>>>>>> I would assume the relation is brick to the provided FS
>>>>>> directory (not brick to disk; we do not control that at the
>>>>>> moment, other than providing best practices around the same).
>>>>>
>>>>> Hmmm... as per my understanding, if we do this then 'df', I
>>>>> guess, will report wrong values? Available size, free size, etc.
>>>>> will be counted more than once?
>>>>
>>>> This is true even today, if anyone uses two bricks from the same
>>>> mount.
>>>
>>> That is the reason the documentation is the way it is, as far as I
>>> can remember.
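
For concreteness, the double counting looks roughly like this when one
volume is given two bricks from the same mount (hypothetical device,
server, and volume names):

    mount /dev/sdb1 /data              # one 1 TB filesystem
    mkdir -p /data/brick1 /data/brick2
    gluster volume create demo \
        server1:/data/brick1 server1:/data/brick2
    mount -t glusterfs server1:demo /mnt/demo
    # gluster aggregates each brick's statfs, so the client mount
    # reports ~2 TB total and counts the same free space twice:
    df -h /mnt/demo
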
>>>>
>>>> I forgot a converse though: we could take a disk and partition it
>>>> (LVM thinp volumes) and use each of those partitions as bricks,
>>>> avoiding the problem of df double counting. Further, thinp will
>>>> help us expand the space available to other bricks on the same
>>>> disk as we destroy older bricks or create new ones to accommodate
>>>> the moving pieces (this needs more careful thought, but it is for
>>>> sure a nightmare without thinp).
>>>>
>>>> I am not so much a fan of a large number of thinp partitions, so
>>>> as long as that stays reasonably under control, we can possibly
>>>> still use it. The big advantage, though, is that we nuke a thinp
>>>> volume when the brick that uses that partition moves off the disk,
>>>> and we get the space back, rather than having to do something akin
>>>> to rm -rf on the backend to reclaim the space.
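
A rough sketch of that layout (hypothetical VG/LV names and sizes):

    pvcreate /dev/sdb
    vgcreate gluster_vg /dev/sdb
    # one thin pool per disk, one thin LV per brick
    lvcreate -L 900G --thinpool brickpool gluster_vg
    lvcreate -V 300G --thin -n brick1 gluster_vg/brickpool
    mkfs.xfs /dev/gluster_vg/brick1
    mount /dev/gluster_vg/brick1 /bricks/brick1

    # when the brick later moves off this disk, reclaim its space
    umount /bricks/brick1
    lvremove gluster_vg/brick1   # instant, vs. rm -rf of the contents
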
>>>
>>> Another way to achieve the same thing is to leverage the quota
>>> functionality of counting how much space is used under a
>>> directory.
>>
>> Yes, I think this is the direction to solve the case of two bricks
>> on a single FS as well. Also, IMO, the accounting at each directory
>> level that quota brings in is too heavyweight to solve just *this*
>> problem.
>
> I saw some GitHub issues where Sanoj is exploring XFS quota
> integration. Project-quota ideas, which are a bit less heavy, would
> be nice too. Actually, all these issues are very much interlinked.
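
For reference, XFS project quotas do that per-directory accounting in
the filesystem itself; a hypothetical example (and exactly the kind of
backend-specific dependency discussed below):

    mount -o prjquota /dev/sdb1 /data
    # bind project id 42 to a brick directory, cap it, report usage
    xfs_quota -x -c 'project -s -p /data/brick1 42' /data
    xfs_quota -x -c 'limit -p bhard=300g 42' /data
    xfs_quota -x -c 'report -p' /data
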
> It all seems to point to the same conclusion: we basically need to
> increase the granularity of bricks and solve the problems that come
> up as we go along.

I'd stay away from anything that requires a specific filesystem
backend. Alternative brick filesystems are way too popular to add a
hard requirement.

>>>>>> Today, gluster takes a directory on the host as a brick, and,
>>>>>> assuming we retain that, we would need to split it into multiple
>>>>>> sub-dirs and use each sub-dir as a brick internally.
>>>>>>
>>>>>> All these sub-dirs thus created are part of the same volume (due
>>>>>> to our current snapshot mapping requirements).
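
A sub-dir brick layout along those lines might look like this
(hypothetical paths; one filesystem carrying four bricks of one
volume):

    mkdir -p /data/vol1/brick{1..4}
    gluster volume create vol1 \
        server1:/data/vol1/brick1 server1:/data/vol1/brick2 \
        server1:/data/vol1/brick3 server1:/data/vol1/brick4
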
>
> --
> Pranith

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users