<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, May 2, 2017 at 12:14 AM, Shyam <span dir="ltr"><<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 05/01/2017 02:42 PM, Pranith Kumar Karampuri wrote:<br>
</span><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
<br>
<br>
On Tue, May 2, 2017 at 12:07 AM, Shyam <<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a><br></span><span class="">
<mailto:<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>>> wrote:<br>
<br>
On 05/01/2017 02:23 PM, Pranith Kumar Karampuri wrote:<br>
<br>
<br>
<br>
On Mon, May 1, 2017 at 11:43 PM, Shyam <<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a><br>
<mailto:<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>><br></span><div><div class="h5">
<mailto:<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a> <mailto:<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>>>> wrote:<br>
<br>
On 05/01/2017 02:00 PM, Pranith Kumar Karampuri wrote:<br>
<br>
Splitting the bricks need not be a post factum<br>
decision, we can<br>
start with larger brick counts, on a given node/disk<br>
count, and<br>
hence spread these bricks to newer nodes/bricks as<br>
they are<br>
added.<br>
>>>>>>
>>>>>> Let's say we have 1 disk; we format it with, say, XFS, and that
>>>>>> becomes a brick at the moment. Just curious, what will the
>>>>>> relationship between brick and disk be in this case (if we leave
>>>>>> out LVM for this example)?
>>>>>
>>>>> I would assume the relation is brick to the provided FS directory
>>>>> (not brick to disk; we do not control that at the moment, other than
>>>>> providing best practices around it).
>>>>
>>>> Hmmm... as per my understanding, if we do this then I guess 'df' will
>>>> report wrong values? available-size/free-size etc. will be counted
>>>> more than once?
>>>
>>> This is true even today, if anyone uses 2 bricks from the same mount.
>>
>> That is the reason why the documentation is the way it is, as far as I
>> can remember.
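
Just to make the df concern concrete, here is a minimal sketch (the brick
paths are hypothetical) of why summing per-brick free space double counts
when two bricks share a mount, and how deduplicating by device id avoids
it:

    import os

    # Hypothetical layout: b1 and b2 live on the same XFS mount, b3 on
    # another one.
    bricks = ["/bricks/b1", "/bricks/b2", "/bricks2/b3"]

    def free_bytes(path):
        st = os.statvfs(path)
        return st.f_bavail * st.f_frsize

    # Naive accounting: every brick contributes its mount's free space,
    # so a shared mount is counted once per brick.
    naive_total = sum(free_bytes(b) for b in bricks)

    # Count each underlying filesystem only once, keyed by device id;
    # this is roughly what a volume-level 'df' should report.
    seen, dedup_total = set(), 0
    for b in bricks:
        dev = os.stat(b).st_dev
        if dev not in seen:
            seen.add(dev)
            dedup_total += free_bytes(b)

    print(naive_total, dedup_total)
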
>>> I forgot a converse though: we could take a disk and partition it (LVM
>>> thinp volumes) and use each of those partitions as bricks, avoiding
>>> the problem of df double counting. Further, thinp will help us expand
>>> available space for other bricks on the same disk as we destroy older
>>> bricks or create new ones to accommodate the moving pieces (this needs
>>> more careful thought, but it is for sure a nightmare without thinp).
>>>
>>> I am not so much a fan of a large number of thinp partitions, so as
>>> long as that is reasonably in control, we can possibly still use it.
>>> The big advantage though is that we can nuke a thinp volume when the
>>> brick that uses that partition moves off that disk, and we get the
>>> space back, rather than having to do something akin to rm -rf on the
>>> backend to reclaim space.
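
For what it is worth, a rough, untested sketch of the thinp layout being
described (the volume group, pool, and brick names are all made up), just
stringing together the standard LVM commands:

    import subprocess

    VG = "vg_bricks"  # hypothetical volume group sitting on the disk

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # One thin pool per disk; each brick is a thin LV carved out of it,
    # with its own XFS, so df on one brick no longer reports the free
    # space of another brick's filesystem (the pool can be overcommitted,
    # of course).
    run("lvcreate", "-L", "900G", "--thinpool", "brickpool", VG)

    def create_brick(name, virtual_size="300G"):
        run("lvcreate", "-V", virtual_size, "--thin", "-n", name,
            VG + "/brickpool")
        dev = "/dev/" + VG + "/" + name
        run("mkfs.xfs", "-i", "size=512", dev)
        run("mkdir", "-p", "/bricks/" + name)
        run("mount", dev, "/bricks/" + name)

    def destroy_brick(name):
        # When the brick moves off this disk, removing the thin LV hands
        # the blocks straight back to the pool -- no rm -rf on the
        # backend needed to reclaim the space.
        run("umount", "/bricks/" + name)
        run("lvremove", "-y", VG + "/" + name)

    create_brick("brick1")
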
>>
>> Another way to achieve the same is to leverage the quota functionality
>> of counting how much space is used under a directory.
>
> Yes, I think this is the direction to solve the 2-bricks-on-a-single-FS
> problem as well. Also, IMO, the weight of accounting at each directory
> level that quota brings in is too heavyweight for solving just *this*
> problem.

I saw some github issues where Sanoj is exploring XFS-quota integration.
Project quota ideas, which are a bit less heavyweight, would be nice too.
Actually, all these issues are very much interlinked.

It all seems to point to the fact that we basically need to increase the
granularity of a brick and solve the problems that come up as we go
along.
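
To illustrate the project-quota idea (only a sketch; the mount point,
brick directory, and project id below are made up, and whether this is
how the XFS-quota integration would actually be wired up is an open
question), XFS can do the per-directory-tree accounting itself when the
filesystem is mounted with prjquota, without gluster's quota xlator
accounting at every directory level:

    import subprocess

    MOUNT = "/bricks"                  # XFS mounted with -o prjquota (assumed)
    BRICK_DIR = "/bricks/vol1_brick3"  # hypothetical sub-directory brick
    PROJECT_ID = "1003"                # arbitrary project id for this brick

    def xfs_quota(command):
        # xfs_quota -x runs an expert-mode command against the mount point.
        subprocess.run(["xfs_quota", "-x", "-c", command, MOUNT], check=True)

    # Tag the brick's directory tree with the project id (new files
    # inherit it), so the filesystem accounts usage per brick directory.
    xfs_quota("project -s -p {} {}".format(BRICK_DIR, PROJECT_ID))

    # Optionally cap the brick, e.g. at 200 GiB.
    xfs_quota("limit -p bhard=200g " + PROJECT_ID)

    # Per-project (i.e. per-brick) usage report.
    xfs_quota("report -p")
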
>>>>> Today, gluster takes in a directory on the host as a brick, and
>>>>> assuming we retain that, we would need to split this into multiple
>>>>> sub-dirs and use each sub-dir as a brick internally.
>>>>>
>>>>> All these sub-dirs thus created are part of the same volume (due to
>>>>> our current snapshot mapping requirements).
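
As a concrete (and entirely hypothetical) illustration of the above:
splitting the provided directory into sub-directory bricks up front, so
that spreading later is a matter of moving bricks rather than splitting
them:

    import subprocess

    HOSTS = ["server1", "server2", "server3"]  # hypothetical nodes
    BASE = "/bricks/data"                      # directory handed to gluster
    BRICKS_PER_NODE = 4                        # more bricks than disks up front

    # Each host exposes BASE/b0 .. BASE/b3 as separate bricks; consecutive
    # bricks in the list form a replica set across the three hosts.
    bricks = ["{}:{}/b{}".format(host, BASE, i)
              for i in range(BRICKS_PER_NODE)
              for host in HOSTS]

    # gluster already accepts any directory path as a brick, so the split
    # is purely a naming convention at volume-creation time.
    cmd = ["gluster", "volume", "create", "demo-vol", "replica", "3"] + bricks
    subprocess.run(cmd, check=True)

Spreading one of these sub-directory bricks to a newly added node is then
a brick move/rebalance rather than a re-partitioning of the original
brick.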

--
Pranith