[Gluster-devel] [Gluster-users] User-serviceable snapshots design

Jeff Darcy jdarcy at redhat.com
Thu May 8 14:47:11 UTC 2014


> > Overall, it seems like having clients connect *directly* to the
> > snapshot volumes once they've been started might have avoided some
> > complexity or problems.  Was this considered?
>
> Can you explain this in more detail? Are you saying that the virtual
> namespace overlay used by the current design can be reused along with
> returning extra info to clients or is this a new approach where you
> make the clients much more intelligent than they are in the current
> approach?

Basically, the clients would have the same intelligence that now
resides in snapview-server.  Instead of spinning up a new
protocol/client to talk to a new snapview-server, they'd send a single
RPC to start the snapshot brick daemons, then connect to those bricks
themselves.
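Roughly, the client-side flow I'm imagining would look something like
the sketch below.  All of the names in it (uss_start_snap_bricks,
uss_connect_brick, snap_brick_info) are placeholders, not existing
GlusterFS APIs; the point is only the shape of the exchange -- one RPC
to have the server spin up the snapshot brick daemons, then direct
connections from the client to each returned brick.

    /* Hypothetical sketch -- none of these names are real GlusterFS APIs. */

    #include <stdio.h>
    #include <stdlib.h>

    /* One entry per snapshot brick daemon the server started for us. */
    struct snap_brick_info {
        char host[256];
        int  port;
    };

    /* Stand-in for the single RPC: ask the server to start the brick
     * daemons for 'snapname' and report where they are listening. */
    static int uss_start_snap_bricks(const char *snapname,
                                     struct snap_brick_info **bricks,
                                     int *nbricks)
    {
        *bricks = calloc(2, sizeof(**bricks));
        if (!*bricks)
            return -1;
        for (int i = 0; i < 2; i++) {
            snprintf((*bricks)[i].host, sizeof((*bricks)[i].host),
                     "server%d", i + 1);
            (*bricks)[i].port = 49200 + i;
        }
        *nbricks = 2;
        printf("asked server to start bricks for snapshot %s\n", snapname);
        return 0;
    }

    /* Stand-in for building a protocol/client connection straight to a
     * snapshot brick, instead of proxying through snapview-server. */
    static int uss_connect_brick(const struct snap_brick_info *b)
    {
        printf("connecting directly to %s:%d\n", b->host, b->port);
        return 0;
    }

    int main(void)
    {
        struct snap_brick_info *bricks = NULL;
        int nbricks = 0;

        /* Step 1: one RPC to have the server start the snapshot bricks. */
        if (uss_start_snap_bricks("snap1", &bricks, &nbricks) != 0)
            return 1;

        /* Step 2: the client connects to each brick itself. */
        for (int i = 0; i < nbricks; i++)
            uss_connect_brick(&bricks[i]);

        free(bricks);
        return 0;
    }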

Of course, this exacerbates the problem of dynamically changing
translator graphs on the client side, because now the dynamically added
parts will be whole trees (corresponding to whole volfiles) instead of
single protocol/client translators.  Long term, I think we should
consider *not* handling these overlays as modifications to the main
translator graph, but instead allowing multiple translator graphs to be
active concurrently within the glusterfs process.  For example, this
greatly simplifies the question of how to deal with a graph change
after we've added several overlays (a rough sketch of the multiple-graph
case follows the list below).

 * "Splice" method: graph comparisons must be enhanced to ignore the
   overlays, overlays must be re-added after the graph switch takes
   place, etc.

 * "Multiple graph" method: just change the main graph (the one that's
   rooted at mount/fuse) and leave the others alone.
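To make the distinction concrete, here is a rough sketch of the
multiple-graph case.  The structs are illustrative stand-ins, not the
real glusterfs_ctx_t/glusterfs_graph_t definitions: the snapshot
overlays live in their own list, so a graph switch only touches the
fuse-rooted main graph.

    /* Hypothetical sketch -- illustrative structs, not GlusterFS's own. */

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct graph {
        char id[64];                 /* e.g. volfile name or checksum */
        /* translator tree would hang off here */
    };

    struct gf_ctx {
        struct graph  *main_graph;   /* rooted at mount/fuse */
        struct graph **overlays;     /* one graph per activated snapshot */
        int            n_overlays;
    };

    /* Graph switch under the multiple-graph model: only the main graph
     * is compared and swapped; overlay graphs are kept separately, so
     * nothing has to be filtered out of the comparison or re-added
     * after the switch. */
    static void graph_switch(struct gf_ctx *ctx, struct graph *new_main)
    {
        struct graph *old = ctx->main_graph;

        ctx->main_graph = new_main;
        printf("switched main graph %s -> %s, %d overlay(s) untouched\n",
               old ? old->id : "(none)", new_main->id, ctx->n_overlays);
        free(old);
    }

    int main(void)
    {
        struct gf_ctx ctx = { 0 };
        struct graph *g1 = calloc(1, sizeof(*g1));
        struct graph *g2 = calloc(1, sizeof(*g2));
        struct graph snap_overlay = { .id = "snap-overlay-1" };
        struct graph *overlay_list[] = { &snap_overlay };

        if (!g1 || !g2)
            return 1;
        strcpy(g1->id, "vol-v1");
        strcpy(g2->id, "vol-v2");

        ctx.main_graph = g1;
        ctx.overlays   = overlay_list;   /* overlay added earlier via USS */
        ctx.n_overlays = 1;

        /* A new volfile arrives: swap only the main graph. */
        graph_switch(&ctx, g2);

        free(g2);
        return 0;
    }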

Stray thought: does any of this break when we're in an NFS or Samba
daemon instead of a native-mount glusterfs daemon?

> > If we're using either on-disk or on-network encryption, client keys
> > and certificates must remain on the clients.  They must not be on
> > servers.  If the volumes are being proxied through snapview-server,
> > it needs those credentials, but letting it have them defeats both
> > security mechanisms.
> >
> > Also, do we need to handle the case where the credentials have
> > changed since the snapshot was taken?  This is probably a more
> > general problem with snapshots themselves, but still needs to be
> > considered.
>
> Agreed. Very nice point you brought up. We will need to think a bit
> more on this Jeff.

This is what reviews are for.  ;)  Another thought: are there any
interesting security implications because USS allows one user to expose
*other users'* previous versions through the automatically mounted
snapshot?


