[Gluster-users] Bareos backup from Gluster mount

Ryan Clough ryan.clough at dsic.com
Thu Jul 30 03:39:39 UTC 2015


My use case is quite different from yours, Michael; in fact, it is the exact
opposite. We are trying to back up data that is staged in a Gluster volume,
while you are trying to back up to a Gluster volume.

___________________________________________
¯\_(ツ)_/¯
Ryan Clough
Information Systems
Decision Sciences International Corporation
<http://www.decisionsciencescorp.com/>

On Wed, Jul 29, 2015 at 2:17 PM, Michael Mol <mikemol at gmail.com> wrote:

> On Mon, Jul 27, 2015 at 5:03 PM Ryan Clough <ryan.clough at dsic.com> wrote:
>
>> Hello,
>>
>> I have cross-posted this question to the bareos-users mailing list.
>>
>> Wondering if anyone has tried this, because I am unable to back up data
>> that is mounted via Gluster FUSE or Gluster NFS. Basically, I have the
>> Gluster volume mounted on the Bareos Director, which also has the tape
>> changer attached.
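>>
>> (For reference, the volume is mounted on the Director with something like
>> the following; the /export mount point matches the paths in the errors
>> further down, but the exact options here are only illustrative:)
>> mount -t glusterfs hgluster01:/export_volume /export
>> or, for the Gluster NFS mount (NFSv3):
>> mount -t nfs -o vers=3 hgluster01:/export_volume /export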
>>
>> Here is some information about versions:
>> Bareos version 14.2.2
>> Gluster version 3.7.2
>> Scientific Linux version 6.6
>>
>> Our Gluster volume consists of two nodes in a plain distribute layout. Here
>> is the configuration of our volume:
>> [root@hgluster02 ~]# gluster volume info
>>
>> Volume Name: export_volume
>> Type: Distribute
>> Volume ID: c74cc970-31e2-4924-a244-4c70d958dadb
>> Status: Started
>> Number of Bricks: 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: hgluster01:/gluster_data
>> Brick2: hgluster02:/gluster_data
>> Options Reconfigured:
>> performance.io-thread-count: 24
>> server.event-threads: 20
>> client.event-threads: 4
>> performance.readdir-ahead: on
>> features.inode-quota: on
>> features.quota: on
>> nfs.disable: off
>> auth.allow: 192.168.10.*,10.0.10.*,10.8.0.*,10.2.0.*,10.0.60.*
>> server.allow-insecure: on
>> server.root-squash: on
>> performance.read-ahead: on
>> features.quota-deem-statfs: on
>> diagnostics.brick-log-level: WARNING
>>
>> When I try to back up a directory from the Gluster FUSE or Gluster NFS
>> mount and I monitor the network traffic, I only see data being pulled from
>> the hgluster01 brick. When the job finishes, Bareos thinks that it completed
>> without error, but the messages for the job include many permission-denied
>> errors like this:
>> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:      Cannot open
>> "/export/rclough/psdv-2014-archives-2/scan_111.tar.bak": ERR=Permission
>> denied.
>> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:      Cannot open
>> "/export/rclough/psdv-2014-archives-2/run_219.tar.bak": ERR=Permission
>> denied.
>> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:      Cannot open
>> "/export/rclough/psdv-2014-archives-2/scan_112.tar.bak": ERR=Permission
>> denied.
>> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:      Cannot open
>> "/export/rclough/psdv-2014-archives-2/run_220.tar.bak": ERR=Permission
>> denied.
>> 15-Jul 02:03 ripper.red.dsic.com-fd JobId 613:      Cannot open
>> "/export/rclough/psdv-2014-archives-2/scan_114.tar.bak": ERR=Permission
>> denied.
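>>
>> (To check which brick a given file actually hashes to, one can query the
>> pathinfo virtual xattr through the FUSE mount, for example:)
>> getfattr -n trusted.glusterfs.pathinfo /export/rclough/psdv-2014-archives-2/scan_111.tar.bak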
>>
>> At first I thought this might be a root-squash problem, but if I try to
>> read/copy a file as the root user from the Bareos server that is doing
>> the backup, I can read files just fine.
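>>
>> (If root squashing were the cause, temporarily disabling it for a test job
>> should make the errors disappear, e.g.:)
>> gluster volume set export_volume server.root-squash off
>> ... run a test backup ...
>> gluster volume set export_volume server.root-squash on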
>>
>> When the job finishes, it reports that it finished "OK -- with warnings",
>> but, again, the log for the job is filled with "ERR=Permission denied"
>> messages. In my opinion, this job did not finish OK and should be marked
>> Failed. Some of the files from the hgluster02 brick are backed up, but the
>> ones with permission errors are not. When I restore the job, all of the
>> files with permission errors are empty.
>>
>> Has anyone successfully used Bareos to back up data from Gluster mounts?
>> This is an important use case for us because this is the largest single
>> volume we have, and it is where we stage large amounts of data to be
>> archived.
>>
>> Thank you for your time,
>>
> How did I not see this earlier? I'm seeing a very similar problem. I just
> posted this to the bareos-users list:
>
> Help! I've run out of know-how while trying to fix this myself...
>
> Environment: CentOS 7, x86_64
> Bareos version: 14.2.2-46.1.el7 (via
> http://download.bareos.org/bareos/release/14.2/CentOS_7/ repo)
> Gluster version: 3.7.3-1.el7 (via
> http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-$releasever/$basearch/
> repo)
>
> Symptom: Bareos attempts to mount a volume, and spits back a Permission
> Denied error, as though it didn't have permission to access the relevant
> file.
>
> I've been seeing this at least since Gluster version 3.7.2. I updated to it
> because I needed to expand my backend storage, and 3.7.1 (which otherwise
> worked fine) had a bug that broke bricks while rebalancing.
>
> I've verified that the bareos storage daemon is running as the bareos
> user, and I've also, by way of a FUSE mount of the gluster volume, verified
> the ownership of the volume file:
>
> # ls -l Email-Incremental-0155
> -rw-r-----. 1 bareos bareos 1073728379 Jun 10 21:04 Email-Incremental-0155
>
> And uid/gid, for reference:
>
> # ls -ln Email-Incremental-0155
> -rw-r-----. 1 997 995 1073728379 Jun 10 21:04 Email-Incremental-0155
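>
> (For reference, one quick way to confirm which user the daemon is running
> as, assuming the process name is bareos-sd:)
> ps -o user=,comm= -C bareos-sd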
>
> And in the gluster volume, the storage owner-{uid,gid}:
> # gluster volume info bareos
>
> Volume Name: bareos
> Type: Distribute
> Volume ID: f4cb7aac-3631-41cc-9afa-f182a514d116
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: backup-stor-1[censored]:/var/gluster/bareos/brick-bareos
> Brick2: backup-stor-2[censored]:/var/gluster/bareos/brick-bareos
> Options Reconfigured:
> server.allow-insecure: on
> performance.readdir-ahead: off
> nfs.disable: on
> performance.cache-size: 128MB
> performance.write-behind-window-size: 256MB
> performance.cache-refresh-timeout: 10
> performance.io-thread-count: 16
> performance.cache-max-file-size: 4TB
> performance.flush-behind: on
> performance.client-io-threads: on
> storage.owner-uid: 997
> storage.owner-gid: 995
> features.bitrot: off
> features.scrub: Inactive
> features.scrub-freq: daily
> features.scrub-throttle: lazy
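>
> (For reference, the owner uid/gid options above can be set with the usual
> volume-set commands, e.g.:)
> gluster volume set bareos storage.owner-uid 997
> gluster volume set bareos storage.owner-gid 995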
>
> In this run, the storage daemon and the file daemon happen to be on the
> same node. Here's trace output at level 200, obtained by running "tail -f
> *.trace" in bareos-sd's cwd:
>
> ==> backup-director-sd.trace <==
> backup-director-sd: fd_cmds.c:219-0 <filed: append open session
> backup-director-sd: fd_cmds.c:303-0 Append open session: append open
> session
> backup-director-sd: fd_cmds.c:314-0 >filed: 3000 OK open ticket = 1
> backup-director-sd: fd_cmds.c:219-0 <filed: append data 1
> backup-director-sd: fd_cmds.c:265-0 Append data: append data 1
> backup-director-sd: fd_cmds.c:267-0 <filed: append data 1
> backup-director-sd: append.c:69-0 Start append data. res=1
> backup-director-sd: acquire.c:369-0 acquire_append device is disk
> backup-director-sd: acquire.c:404-0 jid=924 Do mount_next_write_vol
> backup-director-sd: mount.c:71-0 Enter mount_next_volume(release=0)
> dev="GlusterStorage4" (gluster://backup-stor-1[censored]/bareos/bareos)
> backup-director-sd: mount.c:84-0 mount_next_vol retry=0
> backup-director-sd: mount.c:604-0 No swap_dev set
> backup-director-sd: askdir.c:246-0 >dird CatReq
> Job=server2-email.2015-07-29_16.32.34_09 GetVolInfo
> VolName=Email-Incremental-0155 write=1
> backup-director-sd: askdir.c:175-0 <dird 1000 OK
> VolName=Email-Incremental-0155 VolJobs=0 VolFiles=0 VolBlocks=0 VolBytes=1
> VolMounts=3 VolErrors=0 VolWrites=16646 MaxVolBytes=1073741824
> VolCapacityBytes=0 VolStatus=Recycle Slot=0 MaxVolJobs=0 MaxVolFiles=0
> InChanger=0 VolReadTime=0 VolWriteTime=8455280 EndFile=0
> EndBlock=1073728378 LabelType=0 MediaId=156 EncryptionKey= MinBlocksize=0
> MaxBlocksize=0
> backup-director-sd: askdir.c:211-0 do_get_volume_info return true slot=0
> Volume=Email-Incremental-0155, VolminBlocksize=0 VolMaxBlocksize=0
> backup-director-sd: askdir.c:213-0 setting dcr->VolMinBlocksize(0) to
> vol.VolMinBlocksize(0)
> backup-director-sd: askdir.c:215-0 setting dcr->VolMaxBlocksize(0) to
> vol.VolMaxBlocksize(0)
> backup-director-sd: mount.c:122-0 After find_next_append.
> Vol=Email-Incremental-0155 Slot=0
> backup-director-sd: autochanger.c:99-0 Device "GlusterStorage4"
> (gluster://backup-stor-1[censored]/bareos/bareos) is not an autochanger
> backup-director-sd: mount.c:144-0 autoload_dev returns 0
> backup-director-sd: mount.c:175-0 want vol=Email-Incremental-0155 devvol=
> dev="GlusterStorage4" (gluster://backup-stor-1[censored]/bareos/bareos)
> backup-director-sd: dev.c:536-0 open dev: type=5
> dev_name="GlusterStorage4"
> (gluster://backup-stor-1[censored]/bareos/bareos)
> vol=Email-Incremental-0155 mode=OPEN_READ_WRITE
> backup-director-sd: dev.c:540-0 call open_device mode=OPEN_READ_WRITE
> backup-director-sd: dev.c:941-0 Enter mount
> backup-director-sd: dev.c:610-0 open disk: mode=OPEN_READ_WRITE
> open(gluster://backup-stor-1[censored]/bareos/bareos/Email-Incremental-0155,
> 0x2, 0640)
>
> ==> backup-director-fd.trace <==
>
> ==> backup-director-sd.trace <==
> backup-director-sd: dev.c:617-0 open failed: dev.c:616 Could not open:
> gluster://backup-stor-1[censored]/bareos/bareos/Email-Incremental-0155,
> ERR=Permission denied
>


