[Gluster-users] Add single server
Shyam
srangana at redhat.com
Mon May 1 18:54:33 UTC 2017
On 05/01/2017 02:47 PM, Pranith Kumar Karampuri wrote:
>
>
> On Tue, May 2, 2017 at 12:14 AM, Shyam <srangana at redhat.com> wrote:
>
> On 05/01/2017 02:42 PM, Pranith Kumar Karampuri wrote:
>
>
>
> On Tue, May 2, 2017 at 12:07 AM, Shyam <srangana at redhat.com> wrote:
>
> On 05/01/2017 02:23 PM, Pranith Kumar Karampuri wrote:
>
>
>
> On Mon, May 1, 2017 at 11:43 PM, Shyam <srangana at redhat.com> wrote:
>
> On 05/01/2017 02:00 PM, Pranith Kumar Karampuri wrote:
>
> Splitting the bricks need not be a post factum decision; we can start
> with larger brick counts on a given node/disk count, and hence spread
> these bricks to newer nodes/bricks as they are added.
>
>
> Let's say we have 1 disk; we format it with, say, XFS, and that
> becomes a brick at the moment. Just curious, what will the
> relationship between brick and disk be in this case (if we leave out
> LVM for this example)?
>
>
> I would assume the relation is brick to the provided FS directory
> (not brick to disk; we do not control that at the moment, other than
> providing best practices around it).
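To make the brick-is-a-directory point concrete, here is a sketch of how bricks are specified today (host names, volume name, and paths are made up for illustration): each brick is a filesystem directory, not a whole disk, so nothing prevents two bricks from sharing one mount.

```shell
# Each brick is just a directory on an already-mounted filesystem;
# the example names below are placeholders.
mkdir -p /data/gluster/brick1

gluster volume create demovol replica 2 \
    server1:/data/gluster/brick1 \
    server2:/data/gluster/brick1

gluster volume start demovol
```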
>
>
> Hmmm... as per my understanding, if we do this then 'df', I guess,
> will report wrong values? Available size, free size, etc. will be
> counted more than once?
>
>
> This is true even today, if anyone uses 2 bricks from the same mount.
>
>
> That is the reason why the documentation is the way it is, as far as
> I can remember.
>
>
>
> I forgot a converse though: we could take a disk and partition it
> (LVM thinp volumes) and use each of those partitions as bricks,
> avoiding the problem of df double counting. Further, thinp will help
> us expand available space to other bricks on the same disk as we
> destroy older bricks or create new ones to accommodate the moving
> pieces (needs more careful thought, but is for sure a nightmare
> without thinp).
>
> I am not so much a fan of a large number of thinp partitions, so as
> long as that is reasonably in control, we can possibly still use it.
> The big advantage though is, we nuke a thinp volume when the brick
> that uses that partition moves out of that disk, and we get the
> space back, rather than having to do something akin to rm -rf on the
> backend to reclaim space.
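A rough sketch of the thinp layout described above (device, volume group, and size values are placeholders; assumes LVM2 with thin-provisioning tools): one thin pool per disk, one thin LV per brick.

```shell
# One PV/VG per disk, one thin pool spanning it (names are examples).
pvcreate /dev/sdb
vgcreate gluster_vg /dev/sdb
lvcreate --thinpool gluster_pool --extents 100%FREE gluster_vg

# Carve one thin LV per brick; virtual sizes may oversubscribe the pool.
lvcreate --thin gluster_vg/gluster_pool --virtualsize 500G --name brick1
mkfs.xfs /dev/gluster_vg/brick1
mkdir -p /bricks/brick1
mount /dev/gluster_vg/brick1 /bricks/brick1

# When the brick migrates off this disk, reclaim its space in one step
# instead of an rm -rf on the backend.
umount /bricks/brick1
lvremove gluster_vg/brick1
```

With this layout, df on each brick mount reports only that brick's own thin volume, avoiding the double counting; monitoring thin-pool exhaustion becomes the new operational concern.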
>
>
> Another way to achieve the same is to leverage the quota
> functionality of counting how much space is used under a directory.
>
>
> Yes, I think this is the direction to solve the 2 bricks on a single
> FS problem as well. That said, IMO, the accounting at each directory
> level that quota brings in is heavyweight for solving just *this*
> problem.
>
>
> I saw some GitHub issues where Sanoj is exploring XFS-quota
> integration. Project quota ideas, which are a bit less heavyweight,
> would be nice too. Actually, all these issues are very much
> interlinked.
Yes, while discussing DHT2, Quota-2 [1] was brought up, as were project
quotas and how to leverage that design in Gluster. IMO (again), this
would be the right way forward for quota (orthogonal to this
discussion, but still).
[1] Quota-2 discussion:
http://lists.gluster.org/pipermail/gluster-devel/2015-December/047443.html
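For reference, XFS project quotas give per-directory accounting on a single filesystem without per-directory work in the filesystem client; a sketch of the mechanics (device, mount point, project ID, and limit are placeholders):

```shell
# Mount the brick filesystem with project quota accounting enabled.
mount -o prjquota /dev/sdb1 /bricks

# Assign project ID 42 to a brick directory (and new files beneath it).
xfs_quota -x -c 'project -s -p /bricks/brick1 42' /bricks

# Cap the project at 500G; statfs/df within the directory then reflects
# the project limit rather than the whole filesystem.
xfs_quota -x -c 'limit -p bhard=500g 42' /bricks

# Report per-project usage.
xfs_quota -x -c 'report -p' /bricks
```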
>
> It all seems to point to the fact that we basically need to increase
> the granularity of bricks and solve the problems that come up as we
> go along.