[Gluster-users] gluster volume snap shot - basic questions

Merlin Morgenstern merlin.morgenstern at gmail.com
Wed Sep 2 14:57:40 UTC 2015


Just double-checked the location of the snapshot files.

The documentation says they should be here:

A directory named snap will be created under the vol directory
(..../glusterd/vols/<volname>/snap). Under which each created snap
will be a self contained directory with meta files, and snap volumes

http://www.gluster.org/community/documentation/index.php/Features/snapshot

Unfortunately they are not; they are in /var/lib/glusterd/snaps/

Each snap has a directory with volumes inside.

If I want to use the dd command, which volume should I back up?

ls:

node1:/data/mysql/data$ ll
/var/lib/glusterd/snaps/snap1/2d828e6282964e0e89616b297130aa1b/

total 56

drwxr-xr-x 3 root root 4096 Sep  2 16:04 ./

drwxr-xr-x 4 root root 4096 Sep  2 16:04 ../

-rw------- 1 root root 4559 Sep  2 16:03
2d828e6282964e0e89616b297130aa1b.gs1.run-gluster-snaps-2d828e6282964e0e89616b297130aa1b-brick1-brick1.vol

-rw------- 1 root root 4559 Sep  2 16:03
2d828e6282964e0e89616b297130aa1b.gs2.run-gluster-snaps-2d828e6282964e0e89616b297130aa1b-brick2-brick1.vol

-rw------- 1 root root 2250 Sep  2 16:03
2d828e6282964e0e89616b297130aa1b-rebalance.vol

-rw------- 1 root root 2250 Sep  2 16:03
2d828e6282964e0e89616b297130aa1b.tcp-fuse.vol

drwxr-xr-x 2 root root 4096 Sep  2 16:04 bricks/

-rw------- 1 root root   16 Sep  2 16:04 cksum

-rw------- 1 root root  587 Sep  2 16:04 info

-rw------- 1 root root   93 Sep  2 16:04 node_state.info

-rw------- 1 root root    0 Sep  2 16:03 quota.conf

-rw------- 1 root root   13 Sep  2 16:04 snapd.info

-rw------- 1 root root 2478 Sep  2 16:03
trusted-2d828e6282964e0e89616b297130aa1b.tcp-fuse.vol
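
For what it is worth, the .vol files above appear to be just volfiles
(configuration), not the data itself; the snapshot data lives on the snapshot
LV that is mounted under /run/gluster/snaps. A rough way to see which device
backs a snapshot brick (the VG name "gluster" is an assumption taken from the
df output quoted below):

# show the snapshot brick mounts and the devices behind them
df -h | grep /run/gluster/snaps

# or ask LVM which snapshot LVs exist in the "gluster" VG and what they consume
sudo lvs -o lv_name,origin,data_percent gluster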



2015-09-02 16:31 GMT+02:00 Merlin Morgenstern <merlin.morgenstern at gmail.com>:

> So what would be the fastest possible way to back up the entire file system
> to one single file? Would that probably be dd?
>
> e.g.:
> sudo umount /run/gluster/snaps/7cb4b2c8f8a64ceaba62bc4ca6cd76b2/brick1
>
> sudo dd if=/dev/mapper/gluster-506cb09085b2428e9daca8ac0857c2c9_0 | gzip > snap01.gz
>
> That seems to work, but how could I possibly know the snapshot name? I
> took this info from df -h, since the snapshot cannot be found under
> /snaps/snapshot_name
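>
> A sketch of how this could be scripted (the volume name vol1 is taken from
> the mount log below; the device path is just the example from above):
>
> # list the snapshots of vol1 to find the most recent one
> sudo gluster snapshot list vol1
>
> # back up the LV behind that snapshot's brick (device path as shown by df -h)
> SNAPDEV=/dev/mapper/gluster-506cb09085b2428e9daca8ac0857c2c9_0
> sudo dd if="$SNAPDEV" bs=4M | gzip > snap01.gz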
>
> I also tried to run the command you mentioned:
>
> > to mount snapshot volume:
> > mount -t glusterfs <hostname>:/snaps/<snap-name>/<origin-volname> /<mount_point>
>
> This did not work. There does not seem to be any folder called /snaps/;
> when I press tab I get a suggestion for vol1 but nothing else.
>
> Here is the mount log:
>
> E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk]
> 0-vol1-client-0: failed to get the port number for remote subvolume. Please
> run 'gluster volume status' on server to see if brick process is running.
>
> Thank you in advance for any help
>
>
>
> 2015-09-02 14:11 GMT+02:00 Rajesh Joseph <rjoseph at redhat.com>:
>
>>
>>
>> ----- Original Message -----
>> > From: "Merlin Morgenstern" <merlin.morgenstern at gmail.com>
>> > To: "Rajesh Joseph" <rjoseph at redhat.com>
>> > Cc: "gluster-users" <gluster-users at gluster.org>
>> > Sent: Wednesday, September 2, 2015 11:53:05 AM
>> > Subject: Re: [Gluster-users] gluster volume snap shot - basic questions
>> >
>> > Thank you Rajesh for your help. I now have a thinly provisioned LVM
>> > running and can create snapshots on a real device that survive a reboot.
>> >
>> > There are two other questions coming up now.
>> >
>> > 1. I have an LV of 20G; the data is 7G. How is it possible that I could
>> > make 3 snapshots, each showing 7G?
>> >
>> > /dev/mapper/gluster-thinv1                               20G  7.0G   12G
>> > 38% /bricks/brick1
>> >
>> > /dev/mapper/gluster-7cb4b2c8f8a64ceaba62bc4ca6cd76b2_0   20G  7.0G   12G
>> > 38% /run/gluster/snaps/7cb4b2c8f8a64ceaba62bc4ca6cd76b2/brick1
>> >
>> > /dev/mapper/gluster-506cb09085b2428e9daca8ac0857c2c9_0   20G  7.0G   12G
>> > 38% /run/gluster/snaps/506cb09085b2428e9daca8ac0857c2c9/brick1
>> >
>> > /dev/mapper/gluster-fbee900c1cc7407f9527f98206e6566d_0   20G  7.0G   12G
>> > 38% /run/gluster/snaps/fbee900c1cc7407f9527f98206e6566d/brick1
>> >
>> > /dev/mapper/gluster-d0c254908dca451d8f566be77437c538_0   20G  7.0G   12G
>> > 38% /run/gluster/snaps/d0c254908dca451d8f566be77437c538/brick1
>> >
>> >
>>
>> These snapshots are copy-on-write (COW), therefore they hardly consume any
>> space.
>> As your main volume changes, the space consumption of the snapshots also
>> grows.
>> Check the "lvs" command to see the actual snapshot space consumption.
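>>
>> For example, to see how much of the thin pool each snapshot actually uses
>> (the VG name "gluster" is taken from the device paths above):
>>
>> sudo lvs -o lv_name,pool_lv,origin,data_percent gluster
>>
>> The Data% column shows the space really consumed, which stays near zero for
>> a fresh snapshot and grows as the origin volume is written to.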
>>
>> You can get more detailed information if you search for thinly
>> provisioned LVM and snapshots.
>>
>>
>> > 2. The name of the snapshot folder is the UUID. My plan is to do a "tar cf"
>> > on the snapshot and even incremental tars. For that I would need the name
>> > of the folder. How could I pass that name to my bash script in order to
>> > make a backup of the last snap?
>> >
>>
>> Instead of taking a per-brick backup, you could back up the entire snapshot
>> volume. You can mount the snapshot volume and perform the backup there. Use
>> the following command to mount the snapshot volume:
>> mount -t glusterfs <hostname>:/snaps/<snap-name>/<origin-volname> /<mount_point>
>>
>> Or, if you want to find the name of the snapshot volume (UUID), run the
>> following command:
>> gluster snapshot info
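>>
>> As a concrete sketch (hostname, snapshot name and mount point below are
>> placeholders, not taken from your setup):
>>
>> # the snapshot may need to be activated before it can be mounted
>> gluster snapshot activate snap1
>>
>> mkdir -p /mnt/snap1
>> mount -t glusterfs node1:/snaps/snap1/vol1 /mnt/snap1
>> tar czf /backup/snap1.tar.gz -C /mnt/snap1 .
>> umount /mnt/snap1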
>>
>> >
>> > 3. A tar process will take hours on the millions of files I have. I
>> > understand this is a snapshot; is there a way to back up a "single"
>> > snapshot file instead?
>>
>> The snapshot is maintained in the underlying file system, and I see no way
>> of transferring it to another system.
>>
>> >
>> > Thank you in advance for shedding some light on this topic.
>> >
>> > 2015-09-02 7:59 GMT+02:00 Rajesh Joseph <rjoseph at redhat.com>:
>> >
>> > >
>> > >
>> > > ----- Original Message -----
>> > > > From: "Merlin Morgenstern" <merlin.morgenstern at gmail.com>
>> > > > To: "gluster-users" <gluster-users at gluster.org>
>> > > > Sent: Tuesday, September 1, 2015 3:15:43 PM
>> > > > Subject: [Gluster-users] gluster volume snap shot - basic questions
>> > > >
>> > > > Hi everybody,
>> > > >
>> > > > I am looking into the snapshot tool, following this tutorial:
>> > > > http://blog.gluster.org/2014/10/gluster-volume-snapshot-howto/
>> > > >
>> > > > Having successfully created the LVM, the gluster volume and one
>> > > > snapshot, there are some questions arising where I was hoping to find
>> > > > some guidance here:
>> > > >
>> > > > 1. From a working setup as in the example, I rebooted and everything
>> > > > was gone. How can I make this setup persistent, so the gluster share is
>> > > > up and running after boot?
>> > > >
>> > >
>> > > What do you mean by "everything was gone"? Are you using loopback devices
>> > > as disks?
>> > > If yes, then this is expected: loopback device mappings are gone after a
>> > > machine restart.
>> > > You should test with a real disk or LVM partition.
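>> > >
>> > > For instance, with a real disk the brick mount can be made persistent via
>> > > /etc/fstab (the device and mount point below are only examples):
>> > >
>> > > # /etc/fstab entry for the thin LV backing the brick
>> > > /dev/gluster/thinv1  /bricks/brick1  xfs  defaults  0 0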
>> > >
>> > > > 2. I understand that the snaps are under /var/run/gluster/snaps/ and I
>> > > > found them there. Is it safe to simply copy them to another server for
>> > > > backup? My goal is to create a backup each day and transfer the snaps
>> > > > to an FTP server in order to be able to recover from a broken machine.
>> > > >
>> > >
>> > > Yes, snapshots of individual bricks are mounted at
>> > > /var/run/gluster/snaps/. I am assuming that by "copy the snap" you mean
>> > > copying the data hosted on the snap bricks.
>> > > Are you planning to use some backup software or to run rsync on each
>> > > brick?
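>> > >
>> > > If rsync per brick is the plan, something along these lines could work
>> > > (the snapshot path and the remote target are placeholders):
>> > >
>> > > rsync -aHAX --numeric-ids /run/gluster/snaps/<snap-uuid>/brick1/ \
>> > >       backup-host:/backups/brick1/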
>> > >
>> > > > 3. Do I really need LVM to use this feature? Currently my setup works
>> > > > on the native file system. As I understand the tutorial, I would need
>> > > > to move that to an LV, right?
>> > > >
>> > >
>> > > Yes, you need LVM, and to be precise thinly provisioned LVM, for
>> > > snapshots to work.
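>> > >
>> > > A minimal sketch of such a setup (disk, VG name and sizes are only
>> > > examples):
>> > >
>> > > pvcreate /dev/sdb
>> > > vgcreate gluster /dev/sdb
>> > > lvcreate -L 50G -T gluster/thin_pool            # thin pool
>> > > lvcreate -V 20G -T gluster/thin_pool -n thinv1  # thin LV for the brick
>> > > mkfs.xfs -i size=512 /dev/gluster/thinv1
>> > > mount /dev/gluster/thinv1 /bricks/brick1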
>> > >
>> > > > Thank you in advance on any help!
>> > > >
>> > > > _______________________________________________
>> > > > Gluster-users mailing list
>> > > > Gluster-users at gluster.org
>> > > > http://www.gluster.org/mailman/listinfo/gluster-users
>> > >
>> >
>>
>
>