[Gluster-users] Add single server

Pranith Kumar Karampuri pkarampu at redhat.com
Mon May 1 16:57:16 UTC 2017


On Mon, May 1, 2017 at 10:00 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:

> 2017-05-01 18:23 GMT+02:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:
> > IMHO It is difficult to implement what you are asking for without
> metadata
> > server which stores where each replica is stored.
>
> Can't you distribute a sort of file mapping to each node?
> AFAIK, gluster already has some metadata stored in the cluster; what
> is missing is a mapping between each file/shard and brick.
>

Yes, this is precisely what the other SDS solutions with metadata servers do:
they keep, in a metadata server, a map of which servers each file/blob is
stored on. GlusterFS doesn't do that. In GlusterFS the bricks that replicate
each other are always given up front, and the distribute layer sitting on top
of those replication sets takes care of distributing and fetching the data.
Because replication happens at the brick level rather than the file level, and
distribution happens on top of replication rather than per file, there isn't
much metadata that needs to be stored per file. Hence no need for separate
metadata servers.
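
For example (host names and brick paths below are just placeholders), a 2x2
distributed-replicated volume is defined by listing the replica sets up front;
consecutive bricks in the create command form one replica set, so which bricks
replicate each other comes from the fixed volume layout, not from a per-file
lookup:

    gluster volume create myvol replica 2 \
        node1:/bricks/b1 node2:/bricks/b1 \
        node3:/bricks/b2 node4:/bricks/b2

Distribute then just hashes each file name onto one of those replica sets.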


> Maybe a simple DB (just as an idea: sqlite, berkeleydb, ...) stored in
> a fixed location on gluster itself, being replicated across nodes.
>
If you know the path of a file, you can always find out where it is stored
using pathinfo; see Method-2 in the following link:
https://gluster.readthedocs.io/en/latest/Troubleshooting/gfid-to-path/

You don't need any db.
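
For example (mount point and file name are placeholders), querying the
pathinfo virtual xattr through a FUSE mount prints the backend bricks that
hold a given file:

    getfattr -n trusted.glusterfs.pathinfo /mnt/myvol/somefile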

Basically what you want, if I understood correctly, is this: if we add a 3rd
node with just one disk, the data should automatically rearrange itself into
3 categories (assuming replica-2):
1) Files that are present in Node1, Node2
2) Files that are present in Node2, Node3
3) Files that are present in Node1, Node3

As you can see, we arrive at a contradiction: for those three sets to exist,
every node has to take part in two replica sets, i.e. every node needs at
least 2 bricks, but each node has only 1 disk. We can't do what you are
asking without brick splitting, i.e. we need to split each disk into 2 bricks.
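
For example (node and brick paths are placeholders), with each disk split into
two brick directories, a 3-node replica-2 volume can be laid out as a "chain",
each consecutive pair of bricks forming one replica set:

    gluster volume create myvol replica 2 \
        node1:/disk/brick1 node2:/disk/brick1 \
        node2:/disk/brick2 node3:/disk/brick2 \
        node3:/disk/brick3 node1:/disk/brick3

That gives exactly the three categories above, (Node1,Node2), (Node2,Node3)
and (Node1,Node3), with two bricks per node on a single disk.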

-- 
Pranith

