[Gluster-users] confused about replicated volumes and sparse files
Alastair Neil
ajneil.tech at gmail.com
Thu Feb 20 18:32:54 UTC 2014
I am trying to understand how to verify that a replicated volume is up to
date.
Here is my scenario. I have a gluster cluster with two nodes serving vm
images to ovirt.
I have a volume called vm-store with a brick from each of the nodes:
Volume Name: vm-store
> Type: Replicate
> Volume ID: 379e52d3-2622-4834-8aef-b255db1c67af
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/export/brick0
> Brick2: gluster0:/export/brick0
> Options Reconfigured:
> user.cifs: disable
> nfs.rpc-auth-allow: *
> auth.allow: *
> storage.owner-gid: 36
> storage.owner-uid: 36
The bricks are formatted with xfs using the same options on both servers,
and the two servers are identical hardware and OS version and release
(CentOS 6.5) with glusterfs v3.4.2 from bits.gluster.org.
I have a 20GB sparse disk image for a VM, but I am confused about why I see
different reported disk usage on each of the nodes:
[root@gluster0 ~]# du -sh /export/brick0
> 48G /export/brick0
> [root@gluster0 ~]# du -sh
> /export/brick0/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/images/5dfc7c6f-d35d-4831-b2fb-ed9ab8e3392b/5933a44e-77d6-4606-b6a9-bbf7e4235b13
> 8.6G
> /export/brick0/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/images/5dfc7c6f-d35d-4831-b2fb-ed9ab8e3392b/5933a44e-77d6-4606-b6a9-bbf7e4235b13
> [root@gluster1 ~]# du -sh /export/brick0
> 52G /export/brick0
> [root@gluster1 ~]# du -sh
> /export/brick0/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/images/5dfc7c6f-d35d-4831-b2fb-ed9ab8e3392b/5933a44e-77d6-4606-b6a9-bbf7e4235b13
> 12G
> /export/brick0/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/images/5dfc7c6f-d35d-4831-b2fb-ed9ab8e3392b/5933a44e-77d6-4606-b6a9-bbf7e4235b13
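For comparison, here is how a sparse file behaves on a plain local filesystem (a small illustration of apparent size vs. allocated blocks, run outside the bricks; the filename sparse.img is just an example):

```shell
# Create a sparse file: apparent size is 1G, but essentially no blocks
# are allocated until data is actually written.
truncate -s 1G sparse.img
ls -l sparse.img                          # Size column shows 1073741824
du -h sparse.img                          # ~0, nothing allocated yet
stat -c 'size=%s blocks=%b' sparse.img

# Writing 100M at the start allocates blocks only for that region:
dd if=/dev/urandom of=sparse.img bs=1M count=100 conv=notrunc
sync
du -h sparse.img                          # now roughly 100M allocated
```

So du and the Blocks field of stat track what the filesystem has allocated, not the file's logical size.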
Sure enough, stat also shows a different number of blocks:
[root@gluster0 ~]# stat
> /export/brick0/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/images/5dfc7c6f-d35d-4831-b2fb-ed9ab8e3392b/5933a44e-77d6-4606-b6a9-bbf7e4235b13
> File:
> `/export/brick0/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/images/5dfc7c6f-d35d-4831-b2fb-ed9ab8e3392b/5933a44e-77d6-4606-b6a9-bbf7e4235b13'
> Size: 21474836480 Blocks: 17927384 IO Block: 4096 regular file
> Device: fd03h/64771d Inode: 1610613256 Links: 2
> Access: (0660/-rw-rw----) Uid: ( 36/ vdsm) Gid: ( 36/ kvm)
> Access: 2014-02-18 17:06:30.661993000 -0500
> Modify: 2014-02-20 13:29:33.507966199 -0500
> Change: 2014-02-20 13:29:33.507966199 -0500
> [root@gluster1 ~]# stat
> /export/brick0/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/images/5dfc7c6f-d35d-4831-b2fb-ed9ab8e3392b/5933a44e-77d6-4606-b6a9-bbf7e4235b13
> File:
> `/export/brick0/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/images/5dfc7c6f-d35d-4831-b2fb-ed9ab8e3392b/5933a44e-77d6-4606-b6a9-bbf7e4235b13'
> Size: 21474836480 Blocks: 24735976 IO Block: 4096 regular file
> Device: fd03h/64771d Inode: 3758096942 Links: 2
> Access: (0660/-rw-rw----) Uid: ( 36/ vdsm) Gid: ( 36/ kvm)
> Access: 2014-02-20 09:30:38.490724245 -0500
> Modify: 2014-02-20 13:29:39.464913739 -0500
> Change: 2014-02-20 13:29:39.465913754 -0500
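If I understand sparse allocation right, the block counts can legitimately differ between replicas while the data is identical, so presumably the check I want is at the content level rather than du/stat. Something along these lines (an untested sketch of what I would run):

```shell
# Ask gluster whether any entries on this volume still need healing;
# an empty list should mean the replicas are in sync.
gluster volume heal vm-store info

# Compare actual contents across bricks: identical checksums mean the
# replicas hold the same data even if their allocated block counts
# differ. (Run once on each node against the same brick path.)
md5sum /export/brick0/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/images/5dfc7c6f-d35d-4831-b2fb-ed9ab8e3392b/5933a44e-77d6-4606-b6a9-bbf7e4235b13
```

Is that the right way to think about it, or is there a more direct check?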
Can someone clear up my understanding?
Thanks, Alastair