<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, May 1, 2017 at 11:43 PM, Shyam <span dir="ltr"><<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 05/01/2017 02:00 PM, Pranith Kumar Karampuri wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Splitting the bricks need not be a post-factum decision; we can<br>
start with a larger brick count on a given node/disk count, and<br>
then spread these bricks to newer nodes/disks as they are added.<br>
<br>
<br>
Let's say we have 1 disk; we format it with, say, XFS, and that becomes a<br>
brick at the moment. Just curious: what will be the relationship between<br>
brick and disk in this case (if we leave out LVM for this example)?<br>
</blockquote>
<br></span>
I would assume the relation is brick to the provided FS directory (not brick to disk; we do not control that at the moment, other than providing best practices around it).<br></blockquote><div><br></div><div>Hmmm... as per my understanding, if we do this then 'df' will, I guess, report wrong values? The available/free size of the shared filesystem would be counted once per brick, i.e. more than once?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
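</blockquote><div><br></div><div>For illustration, a minimal sketch of the double-counting concern (hypothetical paths; the temp directory stands in for one shared XFS mount hosting two bricks):<br></div><div>

```python
import os
import shutil
import tempfile

# Hypothetical layout: two "bricks" as sub-directories of a single
# filesystem, standing in for e.g. /bricks/disk1/brick-a and
# /bricks/disk1/brick-b carved out of one XFS mount.
base = tempfile.mkdtemp()
brick_a = os.path.join(base, "brick-a")
brick_b = os.path.join(base, "brick-b")
os.makedirs(brick_a)
os.makedirs(brick_b)

# statvfs is what 'df' reports; both bricks see the same underlying
# filesystem, hence (essentially) the same free space.
st_a = os.statvfs(brick_a)
st_b = os.statvfs(brick_b)
free_a = st_a.f_bavail * st_a.f_frsize
free_b = st_b.f_bavail * st_b.f_frsize

# Naively summing per-brick free space counts the one disk twice.
naive_total = free_a + free_b

shutil.rmtree(base)
```

</div><div><br></div><div>So any aggregation that sums per-brick statvfs results would report roughly twice the real free space for bricks sharing a filesystem.<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">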
<br>
Today, gluster takes in a directory on the host as a brick, and assuming we retain that, we would need to split it into multiple sub-directories and use each sub-directory as a brick internally.<br>
<br>
All these sub-dirs thus created are part of the same volume (due to our current snapshot mapping requirements).<br>
</blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div>
</div></div>