[Gluster-users] Add single server
Joe Julian
joe at julianfamily.org
Mon May 1 20:42:41 UTC 2017
On 05/01/2017 11:47 AM, Pranith Kumar Karampuri wrote:
>
>
> On Tue, May 2, 2017 at 12:14 AM, Shyam <srangana at redhat.com
> <mailto:srangana at redhat.com>> wrote:
>
> On 05/01/2017 02:42 PM, Pranith Kumar Karampuri wrote:
>
>
>
> On Tue, May 2, 2017 at 12:07 AM, Shyam <srangana at redhat.com
> <mailto:srangana at redhat.com>
> <mailto:srangana at redhat.com <mailto:srangana at redhat.com>>> wrote:
>
> On 05/01/2017 02:23 PM, Pranith Kumar Karampuri wrote:
>
>
>
> On Mon, May 1, 2017 at 11:43 PM, Shyam
> <srangana at redhat.com <mailto:srangana at redhat.com>
> <mailto:srangana at redhat.com <mailto:srangana at redhat.com>>
> <mailto:srangana at redhat.com
> <mailto:srangana at redhat.com> <mailto:srangana at redhat.com
> <mailto:srangana at redhat.com>>>> wrote:
>
> On 05/01/2017 02:00 PM, Pranith Kumar Karampuri wrote:
>
> Splitting the bricks need not be a post factum
> decision, we can
> start with larger brick counts, on a given
> node/disk
> count, and
> hence spread these bricks to newer
> nodes/bricks as
> they are
> added.
>
>
> Let's say we have 1 disk, we format it with
> say XFS and that
> becomes a
> brick at the moment. Just curious, what will
> be the
> relationship
> between
> brick to disk in this case(If we leave out LVM
> for this
> example)?
>
>
> I would assume the relation is brick to provided FS
> directory (not
> brick to disk, we do not control that at the
> moment, other than
> providing best practices around the same).
>
>
> Hmmm... as per my understanding, if we do this then
> 'df' I guess
> will
> report wrong values? available-size/free-size etc will be
> counted more
> than once?
>
>
> This is true even today, if anyone uses 2 bricks from the
> same mount.
>
>
> That is the reason why documentation is the way it is as far
> as I can
> remember.
>
>
>
> I forgot a converse though, we could take a disk and
> partition it
> (LVM thinp volumes) and use each of those partitions as
> bricks,
> avoiding the problem of df double counting. Further thinp
> will help
> us expand available space to other bricks on the same
> disk, as we
> destroy older bricks or create new ones to accommodate the
> moving
> pieces (needs more careful thought though, but for sure is a
> nightmare without thinp).
>
> I am not so much a fan of large number of thinp
> partitions, so as
> long as that is reasonably in control, we can possibly
> still use it.
> The big advantage though is, we nuke a thinp volume when
> the brick
> that uses that partition, moves out of that disk, and we
> get the
> space back, rather than having or to something akin to rm
> -rf on the
> backend to reclaim space.
>
>
> Other way to achieve the same is to leverage the quota
> functionality of
> counting how much size is used under a directory.
>
>
> Yes, I think this is the direction to solve the 2 bricks on a
> single FS as well. Also, IMO, the weight of accounting at each
> directory level that quota brings in seems/is heavyweight to solve
> just *this* problem.
>
>
> I saw some github issues where Sanoj is exploring XFS-quota
> integration. Project Quota ideas which are a bit less heavy would be
> nice too. Actually all these issues are very much interlinked.
>
> It all seems to point that we basically need to increase granularity
> of brick and solve problems that come up as we go along.
I'd stay away from anything that requires a specific filesystem backend.
Alternative brick filesystems are way too popular to add a hard requirement.
>
>
>
>
>
>
>
>
>
> Today, gluster takes in a directory on host as a
> brick, and
> assuming
> we retain that, we would need to split this into
> multiple
> sub-dirs
> and use each sub-dir as a brick internally.
>
> All these sub-dirs thus created are part of the
> same volume
> (due to
> our current snapshot mapping requirements).
>
>
>
>
> --
> Pranith
>
>
>
>
> --
> Pranith
>
>
>
>
> --
> Pranith
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20170501/52980761/attachment.html>
More information about the Gluster-users
mailing list