[Gluster-users] gluster0:group1 not matching up with mounted directory

Niels de Vos ndevos at redhat.com
Tue Oct 18 08:28:29 UTC 2016


On Tue, Oct 18, 2016 at 04:57:29AM +0000, Cory Sanders wrote:
> I have volumes set up like this:
> gluster> volume info
> 
> Volume Name: machines0
> Type: Distribute
> Volume ID: f602dd45-ddab-4474-8308-d278768f1e00
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: gluster4:/data/brick1/machines0
> 
> Volume Name: group1
> Type: Distribute
> Volume ID: cb64c8de-1f76-46c8-8136-8917b1618939
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/data/brick1/group1
> 
> Volume Name: backups
> Type: Replicate
> Volume ID: d7cb93c4-4626-46fd-b638-65fd244775ae
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gluster3:/data/brick1/backups
> Brick2: gluster4:/data/brick1/backups
> 
> Volume Name: group0
> Type: Distribute
> Volume ID: 0c52b522-5b04-480c-a058-d863df9ee949
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: gluster0:/data/brick1/group0
> 
> My problem is that when I do a disk free, group1 is filled up:
> 
> root at node0:~# df -h
> Filesystem              Size  Used Avail Use% Mounted on
> udev                     10M     0   10M   0% /dev
> tmpfs                   3.2G  492K  3.2G   1% /run
> /dev/mapper/pve-root     24G   12G   11G  52% /
> tmpfs                   5.0M     0  5.0M   0% /run/lock
> tmpfs                   6.3G   56M  6.3G   1% /run/shm
> /dev/mapper/pve-data     48G  913M   48G   2% /var/lib/vz
> /dev/sda1               495M  223M  248M  48% /boot
> /dev/sdb1               740G  382G  359G  52% /data/brick1
> /dev/fuse                30M   64K   30M   1% /etc/pve
> gluster0:group0         740G  382G  359G  52% /mnt/pve/group0
> 16.xx.xx.137:backups  1.9T  1.6T  233G  88% /mnt/pve/backups
> node4:machines0         7.3T  5.1T  2.3T  70% /mnt/pve/machines0
> gluster0:group1         740G  643G   98G  87% /mnt/pve/group1
> gluster2:/var/lib/vz    1.7T  182G  1.5T  11% /mnt/pve/node2local
> 
> When I do a du -h in the respective directories, this is what I get.
> They don't match up with what a df -h shows.  Gluster0:group0 shows
> the right amount of disk free, but gluster0:group1 is too fat and does
> not correspond to what is in /mnt/pve/group1

du and df work a little differently (a rough sketch of the difference
follows below):
 - du: crawl the directory structure and calculate the size
 - df: call the statfs() function that returns information directly
       from the (superblock of the) filesystem
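
Purely as an illustration (not how du/df are actually implemented),
something like this little Python script shows the two approaches;
os.statvfs() is Python's wrapper around the statfs/statvfs call:

    #!/usr/bin/env python3
    # Illustrative only: 'du' crawls and sums file sizes, 'df' asks the
    # filesystem itself (statvfs/statfs).
    import os, sys

    def du_style(path):
        total = 0
        for root, dirs, files in os.walk(path):
            for name in files:
                try:
                    total += os.lstat(os.path.join(root, name)).st_size
                except OSError:
                    pass
        return total

    def df_style(path):
        st = os.statvfs(path)
        return {'size':  st.f_frsize * st.f_blocks,
                'used':  st.f_frsize * (st.f_blocks - st.f_bfree),
                'avail': st.f_frsize * st.f_bavail}

    if __name__ == '__main__':
        p = sys.argv[1] if len(sys.argv) > 1 else '.'
        print('du-style bytes:', du_style(p))
        print('df-style stats:', df_style(p))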

This means that a 'df' on the mountpoint results in statfs() requests
being sent to the bricks that are used for the Gluster volume. Those
bricks call statfs() on behalf of the Gluster client (the fuse
mountpoint), and the Gluster client uses the values returned by the
bricks to calculate the 'fake' output for 'df'.
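
As a rough sketch of that aggregation (an assumption on my side,
simplified to a plain distribute layout where the per-brick numbers
are just summed; replicated bricks would not simply be added up):

    import os

    def combined_statfs(brick_paths):
        # Simplified stand-in for the aggregation the client does for
        # 'df': every brick reports its local filesystem statistics,
        # and (for a plain distribute layout) they get summed up.
        size = used = avail = 0
        for path in brick_paths:
            st = os.statvfs(path)   # what a brick does on its local fs
            size  += st.f_frsize * st.f_blocks
            used  += st.f_frsize * (st.f_blocks - st.f_bfree)
            avail += st.f_frsize * st.f_bavail
        return size, used, avail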

Now, in your environment you seem to have the RAID1 filesystem mounted
on /data/brick1 (/dev/sdb1 in the above 'df' output), and all of the
bricks are located under /data/brick1/<volume>. This means that every
'df' ends up executing statfs() on the same filesystem that hosts all
of the bricks. Because statfs() returns the statistics for the whole
filesystem (/data/brick1), the used and available size of /data/brick1
is what the Gluster client uses to calculate the statistics it returns
to 'df'.
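
You can see this directly on a brick host: if two brick directories
sit on the same filesystem, statvfs() reports identical totals for
both. For example, on gluster4 the machines0 and backups bricks both
live under /data/brick1 (assuming they are not separate mounts):

    import os

    # If both brick directories sit on the same filesystem, statvfs()
    # reports the same (whole-filesystem) size and free space for each.
    for brick in ('/data/brick1/machines0', '/data/brick1/backups'):
        st = os.statvfs(brick)
        print(brick,
              'size:', st.f_frsize * st.f_blocks,
              'avail:', st.f_frsize * st.f_bavail)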

With this understanding, you should be able to check the size of the
filesystems that back the bricks, and combine them per Gluster volume.
Any /data/brick1 filesystem that hosts multiple bricks will likely show
an 'unexpected' difference in available/used size.
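
If it helps, here is a small (hypothetical) helper you could run on
each brick host to see which local brick directories share a
filesystem; the brick paths are taken from the volume info above,
adjust them to whatever exists on the host you run it on:

    import os
    from collections import defaultdict

    # Brick paths from the volume info above; adjust per host.
    bricks = ['/data/brick1/group0', '/data/brick1/group1',
              '/data/brick1/machines0', '/data/brick1/backups']

    by_device = defaultdict(list)
    for b in bricks:
        if os.path.isdir(b):
            by_device[os.stat(b).st_dev].append(b)

    for dev, paths in by_device.items():
        # Bricks in the same group share one filesystem, and therefore
        # one statfs() result.
        print('device', dev, '->', paths)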

HTH,
Niels