[Gluster-devel] [Gluster-users] User-serviceable snapshots design

Anand Avati avati at gluster.org
Thu May 8 17:58:18 UTC 2014


On Thu, May 8, 2014 at 4:48 AM, Jeff Darcy <jdarcy at redhat.com> wrote:

>
> If snapview-server runs on all servers, how does a particular client
> decide which one to use?  Do we need to do something to avoid hot spots?
>
> Overall, it seems like having clients connect *directly* to the snapshot
> volumes once they've been started might have avoided some complexity or
> problems.  Was this considered?
>

Yes, this was considered. I mentioned the two reasons it was dropped in the
other mail: a) snap view generation requires privileged operations against
glusterd, so moving this task to the server side solves a lot of those
challenges; b) to keep a tab on the total number of connections in the
system, and not explode the connection count as more clients come online
(given that there can be lots of snapshots).


> > > * How does snapview-server manage user credentials for connecting
> > >    to snap bricks?  What if multiple users try to use the same
> > >    snapshot at the same time?  How does any of this interact with
> > >    on-wire or on-disk encryption?
> >
> > No interaction with on-disk or on-wire encryption. Multiple users can
> > always access the same snapshot (volume) at the same time. Why do you
> > see any restrictions there?
>
> If we're using either on-disk or on-network encryption, client keys and
> certificates must remain on the clients.  They must not be on servers.
> If the volumes are being proxied through snapview-server, it needs
> those credentials, but letting it have them defeats both security
> mechanisms.
>

The encryption xlator sits on top of snapview-client on the client side,
and should be able to decrypt file content whether it comes from a snap
view or from the main volume; keys and certs remain on the client. But
thanks for mentioning this: we need to spin up an instance of the locks
xlator on top of snapview-server to satisfy the locking requests from
crypt.
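To make the stacking concrete, here is a rough volfile sketch of the two
graphs being described. This is illustrative only, not the actual generated
graphs: the volume names, options, and subvolume names are made up, and the
real graphs contain many more translators.

```
# Client-side fragment (illustrative): crypt sits above snapview-client,
# so decryption happens on the client for both the main volume and snap
# views, and keys never leave the client.
volume myvol-snapview-client
    type features/snapview-client
    subvolumes myvol-dht myvol-snapd-client
end-volume

volume myvol-crypt
    type encryption/crypt
    # key location is hypothetical, for illustration only
    option master-key /path/to/client/master.key
    subvolumes myvol-snapview-client
end-volume

# Server-side fragment (illustrative): a locks instance stacked on top of
# snapview-server, so lock requests issued by crypt can be satisfied
# against snapshot views as well.
volume myvol-snapd
    type features/snapview-server
    option volname myvol
end-volume

volume myvol-snapd-locks
    type features/locks
    subvolumes myvol-snapd
end-volume
```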

Avati
