[Gluster-devel] [Gluster-users] User-serviceable snapshots design

Anand Avati avati at gluster.org
Thu May 8 17:46:53 UTC 2014


On Thu, May 8, 2014 at 4:45 AM, Ira Cooper <ira at redhat.com> wrote:

> Also inline.
>
> ----- Original Message -----
>
> > The scalability factor I mentioned simply had to do with the core
> > infrastructure (the dependence on very basic mechanisms like the epoll
> > wait thread, and the entire end-to-end flow of a single fop, say a
> > lookup()). Even though this was contained to an extent by the
> > introduction of the io-threads xlator in snapd, it is still a complex
> > path that was never designed for high performance. That wasn't the
> > goal to begin with.
>
> Yes, if you get rid of the daemon it doesn't have those issues ;).
>
> > I am not sure what a linear range versus a non-linear one has to do
> > with the design? Maybe you are seeing something that I miss. A random
> > gfid is generated in the snapview-server xlator on lookups. The
> > snapview-client is essentially a redirector: it detects when a
> > reference is made to a "virtual" inode (based on stored context) and
> > simply redirects the fop to the snapd daemon. It stores the info
> > returned from snapview-server, capturing the essential inode info in
> > the inode context (note that this is the client-side inode we are
> > talking about).
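A minimal sketch of the client-side redirect described above, assuming a
two-subvolume snapview-client (the regular graph plus the connection to
snapd) and the usual xlator boilerplate; svc_lookup, svc_private_t and
priv->snapd_subvol are illustrative stand-ins, not the actual
snapview-client source:

    #include "xlator.h"   /* call_frame_t, xlator_t, STACK_WIND, ... */

    /* Redirect lookups on "virtual" inodes (those under the snapshot
     * view) to the snapd daemon; everything else goes down the regular
     * graph.  svc_lookup_cbk is the usual unwind callback, omitted. */
    int32_t
    svc_lookup (call_frame_t *frame, xlator_t *this, loc_t *loc,
                dict_t *xdata)
    {
            svc_private_t *priv   = this->private;
            xlator_t      *subvol = FIRST_CHILD (this);
            uint64_t       value  = 0;

            /* A prior lookup reply from snapview-server left a marker
             * in the client-side inode context; its presence means
             * "this inode is virtual", so the fop goes to snapd. */
            if (loc->inode && inode_ctx_get (loc->inode, this, &value) == 0)
                    subvol = priv->snapd_subvol;

            STACK_WIND (frame, svc_lookup_cbk, subvol,
                        subvol->fops->lookup, loc, xdata);
            return 0;
    }
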
>
> That last note is merely a warning against changing the properties of the
> UUID generator; please ignore it.
>
> > In the daemon there is another level of translation, which needs to
> > associate this gfid with an inode in the context of the protocol-server
> > xlator. The next step of the translation is that this inode needs to
> > be translated to the actual gfid on disk - the only on-disk gfid that
> > exists, in one of the snapshotted gluster volumes. To that extent the
> > snapview-server xlator needs to know which glfs_t structure to access
> > so it can get to the right gfapi graph. Once it knows that, it can
> > access any object in that gfapi graph using the glfs_object (which has
> > the real inode info from the gfapi world and the actual on-disk gfid).
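A minimal sketch of that daemon-side resolution, assuming snapd keeps one
glfs_t per snapshot volume; resolve_virtual_gfid and fs_for_snap are
hypothetical helpers, not the actual snapview-server code:

    #include <glusterfs/api/glfs.h>
    #include <glusterfs/api/glfs-handles.h>

    /* Hypothetical: returns the glfs_t for a given snapshot volume,
     * from whatever table snapd maintains. */
    extern glfs_t *fs_for_snap (const char *snap_name);

    /* Map (snapshot name, real on-disk gfid) to a glfs_object in the
     * gfapi graph of that snapshot volume. */
    struct glfs_object *
    resolve_virtual_gfid (const char *snap_name, unsigned char *real_gfid)
    {
            struct stat st;

            /* 1. Pick the gfapi graph for the snapshotted volume. */
            glfs_t *fs = fs_for_snap (snap_name);
            if (!fs)
                    return NULL;

            /* 2. Turn the 16-byte on-disk gfid into a glfs_object in
             *    that graph; from there snapd can serve any fop on the
             *    object through gfapi. */
            return glfs_h_create_from_handle (fs, real_gfid,
                                              GFAPI_HANDLE_LENGTH, &st);
    }
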
>
> No daemon!  SCRAP IT!  Throw it in the bin, and don't let it climb back
> out.
>
> What you are proposing: random gfid -> real gfid, as the mapping the
> daemon must maintain.
>
> What I am proposing: real gfid + offset -> real gfid, where the offset is
> a per-snapshot value, local to the client.
>
> Because the lookup table is now trivial (a single integer per snapshot),
> you don't need all that complex infrastructure.
>
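A minimal sketch of the offset scheme proposed above, assuming the
per-snapshot offset is folded into the first 64 bits of the gfid; the
exact encoding is illustrative, not part of the proposal:

    #include <stdint.h>
    #include <string.h>

    typedef unsigned char gfid_t[16];

    /* virtual gfid = real on-disk gfid + per-snapshot offset */
    static void
    gfid_to_virtual (const gfid_t real, uint64_t snap_offset, gfid_t virt)
    {
            uint64_t hi;

            memcpy (&hi, real, sizeof (hi));
            hi += snap_offset;              /* the whole "lookup table" */
            memcpy (virt, &hi, sizeof (hi));
            memcpy (virt + 8, real + 8, 8);
    }

    /* real gfid = virtual gfid - per-snapshot offset */
    static void
    gfid_from_virtual (const gfid_t virt, uint64_t snap_offset, gfid_t real)
    {
            uint64_t hi;

            memcpy (&hi, virt, sizeof (hi));
            hi -= snap_offset;
            memcpy (real, &hi, sizeof (hi));
            memcpy (real + 8, virt + 8, 8);
    }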

The purpose for the existence of the daemon is twofold:

- clients cannot perform privileged ops against glusterd, such as listing
snapshots.

- it limits the total number of connections coming to the bricks. If each
client had its own set of connections to each of the snapshot bricks, the
total number of connections in the system would become a function of the
total number of clients * the total number of snapshots (see the rough
numbers below).
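With illustrative numbers (not from this thread): 500 clients and 128
snapshots of a 6-brick volume would mean

    500 clients x 128 snapshots x 6 bricks = 384,000 brick connections

if every client connected to every snapshot brick directly, versus 500
client-to-snapd connections (plus snapd's own connections to the snapshot
bricks, shared across all clients) with the daemon in place.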

gfid management is completely orthogonal: we can use the current random
gfid or a more deterministic one (making gfids deterministic is going to
require a LOT more changes, and what about already assigned ones, etc.),
whether the .snaps view is generated on the client side or the server
side.