[Gluster-users] gluster volume snap shot - basic questions
Rajesh Joseph
rjoseph at redhat.com
Fri Sep 4 03:16:34 UTC 2015
----- Original Message -----
> From: "Merlin Morgenstern" <merlin.morgenstern at gmail.com>
> To: "Rajesh Joseph" <rjoseph at redhat.com>
> Cc: "gluster-users" <gluster-users at gluster.org>
> Sent: Thursday, September 3, 2015 6:55:49 PM
> Subject: Re: [Gluster-users] gluster volume snap shot - basic questions
>
> I would rather stay with dd and create an image for backup.
>
> The snapshot is already active, but cannot be mounted:
>
> $ sudo gluster snapshot info snap1
> Snapshot : snap1
> Snap UUID : 2788e974-514b-4337-b41a-54b9cb5b0699
> Created : 2015-09-02 14:03:59
> Snap Volumes:
>
> Snap Volume Name : 2d828e6282964e0e89616b297130aa1b
> Origin Volume name : vol1
> Snaps taken for vol1 : 2
> Snaps available for vol1 : 254
> Status : Started
>
> $ sudo mount gs1:/snaps/snap1/vol1 /mnt/external/
>
> mount.nfs: mounting gs1:/snaps/snap1/vol1 failed, reason given by server: No such file or directory
>
> $ sudo mount gs1:/snaps/2d828e6282964e0e89616b297130aa1b/vol1 /mnt/external/
>
> mount.nfs: mounting gs1:/snaps/2d828e6282964e0e89616b297130aa1b/vol1 failed, reason given by server: No such file or directory
You are doing an NFS mount of the snap volume, which is not supported. Currently only
a Gluster (fuse) mount is supported for mounting snapshots.
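For example, a fuse mount of your snapshot would look like this (a sketch using the
snap and volume names from your output above):

  mount -t glusterfs gs1:/snaps/snap1/vol1 /mnt/external/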
>
> Also, when backing up a snapshot with dd I need to know the name of the mount source.
>
> Gluster automounts the snapshot bricks with the following mount source:
>
> /dev/mapper/gluster-2d828e6282964e0e89616b297130aa1b_0
>
> If I knew that name, I could do the dd:
Did you try using the gluster snapshot info command? It will give you the snap volume
name (UUID).
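For scripting, a sketch like the following could work (it assumes your volume group
is named "gluster" and the brick LV carries the "_0" suffix, as your /dev/mapper
paths suggest; the awk filter is illustrative, not a stable interface):

  # create and activate the snapshot, as in your cron plan
  gluster snapshot create snap1 vol1 no-timestamp
  gluster snapshot activate snap1
  # pull the snap volume name (UUID) out of the info output
  SNAPVOL=$(gluster snapshot info snap1 | awk -F: '/Snap Volume Name/ {gsub(/[[:space:]]/, "", $2); print $2}')
  # image the snapshot brick LV, as in your dd example
  dd if=/dev/mapper/gluster-${SNAPVOL}_0 | gzip > snap1.gz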
>
> $ sudo dd if=/dev/mapper/gluster-d0c254908dca451d8f566be77437c538_0 | gzip > snap1.gz
>
> 41738240+0 records in
> 41738240+0 records out
> 21369978880 bytes (21 GB) copied, 401.596 s, 53.2 MB/s
>
> How would I know the name of the mount source to mount?
>
> I want to run this by cron. E.g.
>
> gluster snapshot create snap1 vol1 no-timestamp
> dd if=/dev/mapper/gluster-snap1 | gzip > snap1.gz
> ftp ...
>
> Thank you in advance for shedding some light on doing backups from glusterfs.
>
> 2015-09-03 13:21 GMT+02:00 Rajesh Joseph <rjoseph at redhat.com>:
>
> >
> >
> > ----- Original Message -----
> > > From: "Merlin Morgenstern" <merlin.morgenstern at gmail.com>
> > > To: "Rajesh Joseph" <rjoseph at redhat.com>
> > > Cc: "gluster-users" <gluster-users at gluster.org>
> > > Sent: Wednesday, September 2, 2015 8:27:40 PM
> > > Subject: Re: [Gluster-users] gluster volume snap shot - basic questions
> > >
> > > Just double-checked the location of the snapshot files.
> > >
> > > Documentation says they should be here:
> > >
> > > A directory named snap will be created under the vol directory
> > > (..../glusterd/vols/<volname>/snap). Under which each created snap
> > > will be a self contained directory with meta files, and snap volumes
> > >
> > >
> > > http://www.gluster.org/community/documentation/index.php/Features/snapshot
> > >
> >
> > The above link is a little outdated. Check out the following links:
> >
> > http://www.gluster.org/community/documentation/index.php/Features/Gluster_Volume_Snapshot
> > http://rajesh-joseph.blogspot.in/p/gluster-volume-snapshot-howto.html
> >
> > > Unfortunately they are not; they are in /var/lib/glusterd/snaps/
> > >
> > > Each snap has a directory with volumes inside.
> > >
> >
> > A snapshot of a Gluster volume creates a point-in-time copy of the entire
> > volume. That's why the snapshot of a Gluster volume is itself a kind of
> > Gluster volume. Like any regular Gluster volume, a snapshot also has data
> > and volume config files associated with it.
> >
> > The config files for a snapshot are stored in the
> > /var/lib/glusterd/snaps/<snapname> directory.
> > The actual data (bricks) is stored in individual LVs, which are mounted at
> > /var/run/gluster/snaps/<snap-volname>/brick<no>/
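> >
> > For a quick check of where those snapshot bricks are mounted on a node,
> > something like this works:
> >
> >   mount | grep gluster/snaps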
> >
> >
> > > If I want to use the dd command, which volume should I back up?
> > >
> >
> > I think this would be a very primitive way of taking backups, and the actual
> > backup might take a lot of time. Consider using an open-source backup
> > solution, e.g. Bareos.
> >
> > If you want to use dd, I suggest mounting the snapshot volume and taking the
> > backup from the mount point. Otherwise you need to back up all bricks
> > separately and handle replicas as well.
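> >
> > A rough sketch of that approach (tar rather than dd here, since dd reads
> > devices or files, not directories; hostname and snapshot names taken from
> > your earlier commands):
> >
> >   mkdir -p /mnt/snapbackup
> >   mount -t glusterfs gs1:/snaps/snap1/vol1 /mnt/snapbackup
> >   tar czf /backup/snap1.tar.gz -C /mnt/snapbackup .
> >   umount /mnt/snapbackup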
> >
> > > ls:
> > >
> > > node1:/data/mysql/data$ ll /var/lib/glusterd/snaps/snap1/2d828e6282964e0e89616b297130aa1b/
> > >
> > > total 56
> > > drwxr-xr-x 3 root root 4096 Sep 2 16:04 ./
> > > drwxr-xr-x 4 root root 4096 Sep 2 16:04 ../
> > > -rw------- 1 root root 4559 Sep 2 16:03 2d828e6282964e0e89616b297130aa1b.gs1.run-gluster-snaps-2d828e6282964e0e89616b297130aa1b-brick1-brick1.vol
> > > -rw------- 1 root root 4559 Sep 2 16:03 2d828e6282964e0e89616b297130aa1b.gs2.run-gluster-snaps-2d828e6282964e0e89616b297130aa1b-brick2-brick1.vol
> > > -rw------- 1 root root 2250 Sep 2 16:03 2d828e6282964e0e89616b297130aa1b-rebalance.vol
> > > -rw------- 1 root root 2250 Sep 2 16:03 2d828e6282964e0e89616b297130aa1b.tcp-fuse.vol
> > > drwxr-xr-x 2 root root 4096 Sep 2 16:04 bricks/
> > > -rw------- 1 root root   16 Sep 2 16:04 cksum
> > > -rw------- 1 root root  587 Sep 2 16:04 info
> > > -rw------- 1 root root   93 Sep 2 16:04 node_state.info
> > > -rw------- 1 root root    0 Sep 2 16:03 quota.conf
> > > -rw------- 1 root root   13 Sep 2 16:04 snapd.info
> > > -rw------- 1 root root 2478 Sep 2 16:03 trusted-2d828e6282964e0e89616b297130aa1b.tcp-fuse.vol
> > >
> > >
> > >
> > > 2015-09-02 16:31 GMT+02:00 Merlin Morgenstern <merlin.morgenstern at gmail.com>:
> > >
> > > > So what would be the fastest possible way to make a backup of the entire
> > > > file system to one single file? Would this probably be dd?
> > > >
> > > > e.g.:
> > > > sudo umount /run/gluster/snaps/7cb4b2c8f8a64ceaba62bc4ca6cd76b2/brick1
> > > >
> > > > sudo dd if=/dev/mapper/gluster-506cb09085b2428e9daca8ac0857c2c9_0 | gzip > snap01.gz
> > > >
> > > > That seems to work, but how could I possibly know the snapshot name? I
> > > > took this info from df -h since the snapshot cannot be found under
> > > > /snaps/snapshot_name
> > > >
> > > > I also tried to run the command you mentioned:
> > > >
> > > > > to mount snapshot volume:
> > > > > mount -t glusterfs <hostname>:/snaps/<snap-name>/<origin-volname> /<mount_point>
> > > >
> > > > This did not work. There seems to be no folder called /snaps/, as when
> > > > I press tab I get a suggestion for vol1 but nothing else.
> > > >
> > > > Here is the mount log:
> > > >
> > > > E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk]
> > > > 0-vol1-client-0: failed to get the port number for remote subvolume. Please
> > > > run 'gluster volume status' on server to see if brick process is running.
> >
> > By default snapshots are in a deactivated state. You must activate them
> > before mounting.
> > Use the following command to do so.
> >
> > gluster snapshot activate <snapname>
> >
> > Or use the following config command to activate snapshots by default:
> > gluster snapshot config activate-on-create enable
> >
> > After the above command, all newly created snapshots will be activated
> > by default.
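> >
> > To verify, check the snapshot's Status afterwards; it should read "Started",
> > as in your snap1 output above:
> >
> >   gluster snapshot info snap1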
> >
> > > >
> > > > Thank you in advance for any help
> > > >
> > > >
> > > >
> > > > 2015-09-02 14:11 GMT+02:00 Rajesh Joseph <rjoseph at redhat.com>:
> > > >
> > > >>
> > > >>
> > > >> ----- Original Message -----
> > > >> > From: "Merlin Morgenstern" <merlin.morgenstern at gmail.com>
> > > >> > To: "Rajesh Joseph" <rjoseph at redhat.com>
> > > >> > Cc: "gluster-users" <gluster-users at gluster.org>
> > > >> > Sent: Wednesday, September 2, 2015 11:53:05 AM
> > > >> > Subject: Re: [Gluster-users] gluster volume snap shot - basic questions
> > > >> >
> > > >> > Thank you Rajesh for your help. I have a thinly provisioned LVM now running
> > > >> > and can create snapshots on a real device, surviving reboots.
> > > >> >
> > > >> > There are two other questions coming up now.
> > > >> >
> > > >> > 1. I have an LV with 20G; data is 7G. How is it possible that I could
> > > >> > make 3 snapshots, each 7G?
> > > >> >
> > > >> > /dev/mapper/gluster-thinv1                               20G  7.0G  12G  38% /bricks/brick1
> > > >> > /dev/mapper/gluster-7cb4b2c8f8a64ceaba62bc4ca6cd76b2_0   20G  7.0G  12G  38% /run/gluster/snaps/7cb4b2c8f8a64ceaba62bc4ca6cd76b2/brick1
> > > >> > /dev/mapper/gluster-506cb09085b2428e9daca8ac0857c2c9_0   20G  7.0G  12G  38% /run/gluster/snaps/506cb09085b2428e9daca8ac0857c2c9/brick1
> > > >> > /dev/mapper/gluster-fbee900c1cc7407f9527f98206e6566d_0   20G  7.0G  12G  38% /run/gluster/snaps/fbee900c1cc7407f9527f98206e6566d/brick1
> > > >> > /dev/mapper/gluster-d0c254908dca451d8f566be77437c538_0   20G  7.0G  12G  38% /run/gluster/snaps/d0c254908dca451d8f566be77437c538/brick1
> > > >> >
> > > >> >
> > > >>
> > > >> These snapshots are copy-on-write (COW), therefore they hardly consume any
> > > >> space. As your main volume changes, the space consumption of the snapshots
> > > >> also grows. Check the "lvs" command to see the actual snapshot space
> > > >> consumption.
> > > >>
> > > >> You can get more detailed information if you search for thinly
> > > >> provisioned LVM and snapshots.
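> > > >>
> > > >> For example (assuming your volume group is named "gluster", as your
> > > >> /dev/mapper paths suggest):
> > > >>
> > > >>   # the Data% column shows how much of each thin LV/snapshot is actually used
> > > >>   lvs gluster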
> > > >>
> > > >>
> > > >> > 2. The name of the snapshot folder is the UUID. My plan is to do a "tar cf"
> > > >> > on the snapshot and even incremental tars. Therefore I would need the name
> > > >> > of the folder. How could I pass that name to my bash script in order to
> > > >> > make a backup of the last snap?
> > > >> >
> > > >>
> > > >> Instead of taking a per-brick backup you can take a backup of the entire
> > > >> snapshot volume. You can mount the snapshot volume and perform the backup.
> > > >> Use the following command to mount the snapshot volume:
> > > >> mount -t glusterfs <hostname>:/snaps/<snap-name>/<origin-volname> /<mount_point>
> > > >>
> > > >> Or else, if you want to find the name of the snapshot volume (UUID), run the
> > > >> following command:
> > > >> gluster snapshot info
> > > >>
> > > >> >
> > > >> > 3. A tar process will take hours on the million files I have. I understand
> > > >> > this is a snapshot; is there a way to back up a "single" snapshot file
> > > >> > instead?
> > > >>
> > > >> The snapshot is maintained in the underlying file-system and I see no way of
> > > >> transferring it to another system.
> > > >>
> > > >> >
> > > >> > Thank you in advance for shedding some light on this topic.
> > > >> >
> > > >> > 2015-09-02 7:59 GMT+02:00 Rajesh Joseph <rjoseph at redhat.com>:
> > > >> >
> > > >> > >
> > > >> > >
> > > >> > > ----- Original Message -----
> > > >> > > > From: "Merlin Morgenstern" <merlin.morgenstern at gmail.com>
> > > >> > > > To: "gluster-users" <gluster-users at gluster.org>
> > > >> > > > Sent: Tuesday, September 1, 2015 3:15:43 PM
> > > >> > > > Subject: [Gluster-users] gluster volume snap shot - basic questions
> > > >> > > >
> > > >> > > > Hi everybody,
> > > >> > > >
> > > >> > > > I am looking into the snapshot tool, following this tutorial:
> > > >> > > > http://blog.gluster.org/2014/10/gluster-volume-snapshot-howto/
> > > >> > > >
> > > >> > > > While having successfully created the LVM, gluster volume and one
> > > >> > > > snapshot, there are some questions arising where I was hoping to find
> > > >> > > > some guidance here:
> > > >> > > >
> > > >> > > > 1. From a working setup as in the example, I rebooted and everything
> > > >> > > > was gone. How can I make this setup persistent, so the gluster share
> > > >> > > > is up and running after boot?
> > > >> > > >
> > > >> > >
> > > >> > > What do you mean by "everything was gone"? Are you using loopback devices
> > > >> > > as disks? If yes then this is expected: loopback device mappings are gone
> > > >> > > after a machine restart. You should test with a real disk or LVM partition.
> > > >> > >
> > > >> > > > 2. I understand that the snaps are under /var/run/gluster/snaps/ and I
> > > >> > > > found them there. Is it safe to simply copy them to another server for
> > > >> > > > backup? My goal is to create a backup each day and transfer the snaps to
> > > >> > > > an FTP server in order to be able to recover from a broken machine.
> > > >> > > >
> > > >> > >
> > > >> > > Yes, snaps of individual bricks are mounted at /var/run/gluster/snaps/. I
> > > >> > > am assuming that you mean a copy of the data hosted on the snap brick when
> > > >> > > you say copy the snap. Are you planning to use some backup software or to
> > > >> > > run rsync on each brick?
> > > >> > >
> > > >> > > > 3. Do I really need LVM to use this feature? Currently my setup works on
> > > >> > > > the native system. As I understand the tutorial, I would need to move
> > > >> > > > that to an LV, right?
> > > >> > > >
> > > >> > >
> > > >> > > Yes, you need LVM, and to be precise thinly provisioned LVM, for snapshots
> > > >> > > to work.
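> > > >> > >
> > > >> > > A minimal sketch of such a setup (the disk /dev/sdb, pool name and sizes
> > > >> > > are assumptions; the VG/LV names follow the /dev/mapper/gluster-thinv1
> > > >> > > layout seen elsewhere in this thread):
> > > >> > >
> > > >> > >   pvcreate /dev/sdb
> > > >> > >   vgcreate gluster /dev/sdb
> > > >> > >   # create a thin pool, then carve a thin LV out of it for the brick
> > > >> > >   lvcreate -L 50G --thinpool thinpool gluster
> > > >> > >   lvcreate -V 20G --thin -n thinv1 gluster/thinpool
> > > >> > >   mkfs.xfs /dev/gluster/thinv1
> > > >> > >   mount /dev/gluster/thinv1 /bricks/brick1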
> > > >> > >
> > > >> > > > Thank you in advance for any help!
> > > >> > > >
> > > >> > > > _______________________________________________
> > > >> > > > Gluster-users mailing list
> > > >> > > > Gluster-users at gluster.org
> > > >> > > > http://www.gluster.org/mailman/listinfo/gluster-users
> > > >> > >
> > > >> >
> > > >>
> > > >
> > > >
> > >
> >
>