[Gluster-devel] Fwd: Snapshot feature design review

Fred van Zwieten fvzwieten at vxcompany.com
Tue Oct 29 11:33:20 UTC 2013


Another great "add-on" would be some sort of snapmirror-like feature built
on top of the geo-replication engine. Again, look at NetApp's
implementation for inspiration.

An implementation could look like this: geo-replicate a snapshot of a
volume to a target, then stop and remove the geo-replication session when
the transfer is done. Repeating this against the same target, each time
from the next snapshot, yields snapshot-based replication in which the
rsync-based geo-replication only transfers the delta: the target always
matches the most recently replicated snapshot, and only the diff is synced.
That is ideal for failover scenarios where the target must be in a
(crash-)consistent state.
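
A rough sketch of one such cycle is below. All of the syntax is assumed
purely for illustration: the snapshot commands are still at the design
stage, and geo-replicating directly from a snapshot is exactly the
capability being proposed here, not something that exists today.

  # take a crash-consistent snapshot of the source volume
  gluster snapshot create vol1 -n snap-20131029

  # geo-replicate that snapshot to the DR volume; the vol1@snap-20131029
  # notation for "replicate from this snapshot" is invented for this sketch
  gluster volume geo-replication vol1@snap-20131029 drhost::vol1-dr start

  # once the delta has been transferred, tear the session down again
  gluster volume geo-replication vol1@snap-20131029 drhost::vol1-dr stop
  gluster volume geo-replication vol1@snap-20131029 drhost::vol1-dr delete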

NetApp's implementation differs in that the snapshot states are also
preserved on the target. Supporting that would mean being able to
geo-replicate from a snapshot on the source volume into a newly created
writable snapshot on the target volume. Once the basic snapshot
implementation is done, that should be rather easy to add.
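
Sketching that variant under the same caveat (all names, options, and the
snapshot-to-snapshot notation are invented for illustration):

  # create a writable snapshot on the target to receive this generation
  gluster snapshot create vol1-dr -n snap-20131029 --writable

  # replicate from the source snapshot into the target's writable snapshot
  gluster volume geo-replication vol1@snap-20131029 \
      drhost::vol1-dr@snap-20131029 start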

Bonus points if the target somehow knows that the replication is done, so
it can do its own thing based on that (like initiating a cascaded
replication).
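
Purely as a sketch of that idea (no such hook exists today, and every name,
path, and command below is made up), the target could run something like:

  #!/bin/sh
  # hypothetical hook fired on the target when a replication cycle completes:
  # snapshot the just-received state, then cascade it on to a third site
  gluster snapshot create vol1-dr -n cascade-$(date +%Y%m%d%H%M)
  gluster volume geo-replication vol1-dr tertiary::vol1-dr2 start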

Cheers,

Fred

Seeing, contrary to popular wisdom, isn’t believing. It’s where belief
stops, because it isn’t needed any more. (Terry Pratchett)


On Mon, Oct 28, 2013 at 7:38 PM, Paul Cuzner <pcuzner at redhat.com> wrote:

>
> Thanks for responding on this.
>
> As far as snapshot schedules are concerned, I'd recommend that the
> definition of a schedule be separate from the snapshot, with the snapshot
> then associated with a schedule. This would enable:
> - schedules to be centralised and used for other functions
> - schedules to be reused across volumes
> - schedules to be regarded as a "policy" and applied across multiple
> clusters, potentially by RHS-C, driving site standards and consistency.
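>
> For illustration only, such a decoupled model might look something like
> the following; every command, name, and option here is hypothetical and
> not part of the proposed design:
>
>   # define a reusable schedule object, independent of any volume
>   gluster snapshot schedule create nightly \
>       --start 2013-11-01T02:00:00 --interval 1d --keep 7
>
>   # associate existing volumes with that schedule
>   gluster snapshot schedule apply nightly vol1
>   gluster snapshot schedule apply nightly vol2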
>
> Cheers,
>
> PC
>
> ----- Original Message -----
> > From: "Nagaprasad Sathyanarayana" <nsathyan at redhat.com>
> > To: "Fred van Zwieten" <fvzwieten at vxcompany.com>, "Paul Cuzner" <
> pcuzner at redhat.com>
> > Cc: "Shishir Gowda" <sgowda at redhat.com>, "Anand Subramanian" <
> ansubram at redhat.com>
> > Sent: Tuesday, 29 October, 2013 6:15:19 AM
> > Subject: Re: [Gluster-devel] Fwd: Snapshot feature design review
> >
> > Hi Paul, Fred,
> >
> > Thank you for providing valuable inputs. We shall certainly go through
> > these and update you further.
> >
> > Regards
> > Nagaprasad
> >
> >
> > > On 28-Oct-2013, at 12:56 pm, Fred van Zwieten <fvzwieten at vxcompany.com> wrote:
> > >
> > > Hi,
> > >
> > > I have largely the same comments as Paul. I would also like to see a
> > > snap retention feature; this could be built into the scheduling mechanism.
> > > Something like this:
> > >
> > > gluster snapshot create <vol-name> [-n snap-name] [-d description]
> > >     [-s <name>:<start-datetime>:<delta-datetime>:<keep> ...]
> > >
> > > Where:
> > > <name> is the name of this schedule
> > > <start-datetime> is the timestamp of the first snapshot
> > > <delta-datetime> is the interval between snapshots
> > > <keep> is the number of snapshots to keep for this schedule
> > >
> > > Multiple schedules should be possible.
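> > >
> > > For example (values purely illustrative), an hourly schedule named
> > > "hourly" that starts at midnight and keeps the last 24 snapshots could
> > > then be requested as:
> > >
> > >   gluster snapshot create myvol -n hourly-snap \
> > >       -s hourly:20131101T0000:1h:24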
> > >
> > > Another thing, concerning the space management of snapshots: there should
> > > be an absolute maximum size limit on a volume plus all of its snapshots.
> > > Look at NetApp's implementation for inspiration.
> > >
> > > Cheers,
> > >
> > > Fred
> > >
> > >
> > >> On Mon, Oct 28, 2013 at 4:26 AM, Paul Cuzner <pcuzner at redhat.com> wrote:
> > >>
> > >> Hi,
> > >>
> > >> I've just reviewed the doc, and would like to clarify a couple of things
> > >> regarding the proposed design.
> > >>
> > >>
> > >> - I don't see a snapshot schedule type of command to generate automated
> > >> snapshots. What's the plan here? In a distributed environment the
> > >> schedule for snapshots should be an attribute of the volume, shouldn't it?
> > >> If we designate a node in the cluster as 'master' and use cron to manage
> > >> the snaps, what happens when this node is down, rebuilt, or loses its
> > >> config? To me there seems to be a requirement for a gluster scheduler to
> > >> manage snapshots, and potentially future tasks like post dedupe, data
> > >> integrity checking, or maybe even geo-rep intervals.
> > >>
> > >> - snapshots are reliant upon dm-thinp, which makes that version of LVM a
> > >> dependency. Is there a clear path for migrating from classic LVM to
> > >> dm-thinp based LVs, or are snapshots in 3.5 basically going to be a
> > >> feature from this point forward, i.e. no backwards compatibility?
> > >>
> > >> - when managing volumes holding snaps, visibility of the capacity usage
> > >> attributed to snaps is key, but I don't see a means of discerning
> > >> per-snapshot space usage in the CLI breakdown.
> > >>
> > >> - on other systems, I've had hung backup tasks (for days!) holding on to
> > >> snaps, causing space usage to climb against the primary volume. In that
> > >> scenario I was able to see snap usage and which client had the snapshot
> > >> open in order to troubleshoot. How will a glusterfs snapshot present
> > >> itself and be managed in such a scenario?
> > >>
> > >> - How will the snapshot volume be perceived by Windows clients over SMB?
> > >> Will these users be able to use, for example, the Previous Versions tab
> > >> in the file properties dialog in Explorer?
> > >>
> > >> - a volume snapshot is based on snaps of the component bricks. 3.4 changed
> > >> the way bricks are specified at vol create time to require a directory on
> > >> a filesystem rather than the filesystem itself. This change enables users
> > >> to create multiple volumes from the same physical brick by placing
> > >> different directories under the brick's root, which is not necessarily a
> > >> good idea. Given the 1:1 requirement of brick:volume, will this CLI
> > >> behaviour be reverted to the way it was in 3.3?
> > >>
> > >> Happy to talk further about any of the above, if needed.
> > >>
> > >> Regards,
> > >>
> > >> Paul Cuzner
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> ----- Original Message -----
> > >> > From: "Nagaprasad Sathyanarayana" <nsathyan at redhat.com>
> > >> > To: gluster-devel at nongnu.org
> > >> > Sent: Friday, 18 October, 2013 5:22:31 AM
> > >> > Subject: [Gluster-devel] Fwd: Snapshot feature design review
> > >> >
> > >> > Gluster devel included.
> > >> >
> > >> > Thanks
> > >> > Naga
> > >> >
> > >> > Begin forwarded message:
> > >> >
> > >> > From: Nagaprasad Sathyanarayana <nsathyan at redhat.com>
> > >> > Date: 17 October 2013 9:45:05 pm IST
> > >> > To: Shishir Gowda <sgowda at redhat.com>
> > >> > Cc: anands at redhat.com, rfortier at redhat.com, ssaha at redhat.com,
> > >> > aavati at redhat.com, atumball at redhat.com, vbellur at redhat.com,
> > >> > vraman at redhat.com, lpabon at redhat.com, kkeithle at redhat.com,
> > >> > jdarcy at redhat.com, gluster-devel at redhat.com
> > >> > Subject: Re: Snapshot feature design review
> > >> >
> > >> > + Gluster devel.
> > >> >
> > >> > Hi all,
> > >> >
> > >> > Kindly review the design and provide any comments by next week. We are
> > >> > targeting to have the review comments incorporated into the design and
> > >> > to post the final design by the 28th of this month (October). If you
> > >> > need any discussion of the design, please let us know by the 21st or
> > >> > 22nd of this month. If anybody who is not copied should be involved in
> > >> > the design review, please feel free to forward the design document to
> > >> > them.
> > >> >
> > >> > Thanks
> > >> > Naga
> > >> >
> > >> >
> > >> >
> > >> > On 16-Oct-2013, at 7:03 pm, Shishir Gowda <sgowda at redhat.com> wrote:
> > >> >
> > >> > Hi All,
> > >> >
> > >> > The design document has been updated, and we have tried to address all
> > >> > the review comments and design issues to the best of our ability.
> > >> >
> > >> > Please review the design and the document when possible.
> > >> >
> > >> > The design document can be found @
> > >> > https://forge.gluster.org/snapshot/pages/Home
> > >> >
> > >> > Please feel free to critique/comment.
> > >> >
> > >> > With regards,
> > >> > Shishir
> > >> >
> > >> > _______________________________________________
> > >> > Gluster-devel mailing list
> > >> > Gluster-devel at nongnu.org
> > >> > https://lists.nongnu.org/mailman/listinfo/gluster-devel
> > >> >
> > >>
> > >> _______________________________________________
> > >> Gluster-devel mailing list
> > >> Gluster-devel at nongnu.org
> > >> https://lists.nongnu.org/mailman/listinfo/gluster-devel
> > >
> >
>