[Gluster-users] internals of snapshot restore

Avra Sengupta asengupt at redhat.com
Mon Mar 6 06:36:32 UTC 2017


Hi Joe,

Gluster volumes are made up of brick processes, each associated with a 
particular brick directory. For the original volume, the brick 
processes run on the brick directories provided during volume creation. 
When a snapshot of that volume is taken, gluster creates a snapshot 
volume, which has its own bricks running on directories like the one 
you mentioned 
(/run/gluster/snaps/11efcc850133419991c4614b7cb7189c/brick3/brick). 
Each such snapshot brick directory is where the LVM snapshot of the 
original brick's LVM volume is mounted.
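
For example (a minimal sketch; the volume name "myvol" is a 
placeholder, and this assumes the bricks sit on thinly provisioned 
LVM, which snapshots require):

  # take a snapshot of the volume
  gluster snapshot create snap1 myvol

  # the snapshot volume's bricks live under /run/gluster/snaps/
  gluster snapshot info snap1

  # each snapshot brick is a mount of an LVM thin snapshot of the
  # original brick's LV
  mount | grep /run/gluster/snaps
  lvs -o lv_name,origin,pool_lv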

On performing a snapshot restore, the volume has to be offline because 
we update the original volume's info files to make them point to the 
snapshot bricks instead of the original bricks. We also remove the 
snapshot's info files. As a result, when the volume is started after 
the restore, it points to the snapshot bricks and the user gets the 
data as it was when the snapshot was taken.
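
A typical restore sequence then looks like this (again a sketch; 
"myvol" and "snap1" are placeholder names):

  # restore only works on a stopped volume
  gluster volume stop myvol
  gluster snapshot restore snap1
  gluster volume start myvol

  # the volume's bricks now point at the snapshot brick paths
  gluster volume info myvol | grep ^Brick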

On principle, we do not touch the user-created directory, as we don't 
claim "jurisdiction" over it. Hence you can still see the older data in 
those backend directories even after the restore. It is the user's 
responsibility to decide what to do with the original directory and its 
data. This behavior is inherited from volume delete, where we take the 
same precaution to make sure that we don't implicitly delete 
user-created directories and data.
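
So after a restore you would see something like this (the original 
brick path /data/brick1/brick is assumed here purely for illustration):

  # the original brick directory is untouched and still holds the
  # pre-restore data
  ls /data/brick1/brick

  # cleaning it up is left to the user; only do this once you are
  # sure the volume no longer references this path
  rm -rf /data/brick1/brick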

However, once you have restored the volume to a snapshot, it is already 
pointing to snapshot bricks (created by gluster), so any subsequent 
restore will remove the snapshot bricks that are currently part of the 
volume, as those bricks were created by gluster and not by the user.
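
For example (a sketch, assuming a second snapshot snap2 was taken 
after the first restore; names are placeholders):

  # the volume's bricks currently live under /run/gluster/snaps/
  gluster volume info myvol | grep ^Brick

  # a second restore points the volume at snap2's bricks and removes
  # the gluster-created snapshot bricks from the previous restore
  gluster volume stop myvol
  gluster snapshot restore snap2
  gluster volume start myvol

Thanks.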

Regards,
Avra

On 03/04/2017 07:21 PM, Joseph Lorenzini wrote:
> Hi all,
>
> Testing out snapshots on gluster and they work great! I have a 
> question about how the snapshot restore works. After I successfully 
> restore and start up my volume, the brick directory is not the same:
>
> /run/gluster/snaps/11efcc850133419991c4614b7cb7189c/brick3/brick
>
> And if I look in the original brick directory, the old data that 
> predates the restore still exists. This isn't what I would have 
> expected a "restore" to do. Especially since the restore operation 
> requires that a volume be offline, my expectation is that it would 
> overwrite the data in the old brick directory.
>
> Would anyone be able to explain why it doesn't work this way?
>
> Thanks,
> Joe
