[Gluster-users] User-serviceable snapshots design

Jeff Darcy jdarcy at redhat.com
Thu May 8 11:48:37 UTC 2014


> > * Since a snap volume will refer to multiple bricks, we'll need
> >    more brick daemons as well.  How are *those* managed?
> 
> This is infrastructure handled by the "core" snapshot functionality. When
> a snap is created, it is treated not only as an lvm2 thin LV but as a
> glusterfs volume as well. The snap volume is activated and mounted and
> made available for regular use through the native fuse-protocol client.
> Management of these volumes is not part of the USS feature; it is handled
> as part of the core snapshot implementation.

If we're auto-starting snapshot volumes, are we auto-stopping them as
well?  According to what policy?

> USS (mainly the snapview-server xlator)
> talks to the snapshot volumes (and hence the bricks) through a glfs_t *,
> passing glfs_object pointers.
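
If I'm reading that correctly, the per-snapshot access path inside
snapview-server would look roughly like the sketch below.  This is only my
reconstruction of the described design; the volume name, host, and helper
function are placeholders, and the exact glfs_h_* signatures have varied
between releases:

#include <glusterfs/api/glfs.h>
#include <glusterfs/api/glfs-handles.h>
#include <stdio.h>

/* Illustrative only: resolve one path inside an activated snapshot
 * volume through the handle-based gfapi calls, one glfs_t per snap. */
static int
lookup_in_snap (const char *snap_volname, const char *path)
{
        struct stat         st;
        struct glfs_object *obj = NULL;
        glfs_t             *fs  = glfs_new (snap_volname);

        if (!fs)
                return -1;

        glfs_set_volfile_server (fs, "tcp", "localhost", 24007);
        if (glfs_init (fs) != 0) {      /* connects to the snap's bricks */
                glfs_fini (fs);
                return -1;
        }

        /* NULL parent means "resolve relative to the snap volume's root" */
        obj = glfs_h_lookupat (fs, NULL, path, &st);
        if (obj) {
                printf ("%s: %lld bytes\n", path, (long long) st.st_size);
                glfs_h_close (obj);
        }

        glfs_fini (fs);
        return obj ? 0 : -1;
}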

So snapview-server is using GFAPI from within a translator?  This caused
a *lot* of problems in NSR reconciliation, especially because of how
GFAPI constantly messes around with the "THIS" pointer.  Does the USS
work include fixing these issues?
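
For reference, the kind of guard I mean is sketched below; every gfapi call
made from translator context seems to need it, because the glfs_* entry
points overwrite the global THIS with gfapi's own xlator.  The wrapper name
is hypothetical, and the includes assume the normal xlator build environment:

#include <glusterfs/xlator.h>           /* xlator_t, THIS */
#include <glusterfs/api/glfs.h>
#include <glusterfs/api/glfs-handles.h>

/* Hypothetical wrapper: save and restore the caller's THIS around a
 * gfapi call so the rest of the translator graph sees its own context. */
static int
svs_stat_guarded (glfs_t *fs, struct glfs_object *object, struct stat *buf)
{
        xlator_t *saved = THIS;         /* our translator's context */
        int       ret   = glfs_h_stat (fs, object, buf);

        THIS = saved;                   /* gfapi may have swapped it out */
        return ret;
}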

If snapview-server runs on all servers, how does a particular client
decide which one to use?  Do we need to do something to avoid hot spots?

Overall, it seems like having clients connect *directly* to the snapshot
volumes once they've been started might have avoided some of this
complexity.  Was this considered?

> > * How does snapview-server manage user credentials for connecting
> >    to snap bricks?  What if multiple users try to use the same
> >    snapshot at the same time?  How does any of this interact with
> >    on-wire or on-disk encryption?
> 
> No interaction with on-disk or on-wire encryption. Multiple users can
> always access the same snapshot (volume) at the same time. Why do you
> see any restrictions there?

If we're using either on-disk or on-wire encryption, client keys and
certificates must remain on the clients; they must never be present on the
servers.  If the volumes are being proxied through snapview-server, then it
needs those credentials, but letting it have them defeats both security
mechanisms.

Also, do we need to handle the case where the credentials have changed
since the snapshot was taken?  This is probably a more general problem
with snapshots themselves, but still needs to be considered.


