[Gluster-users] df shows wrong mount size, after adding bricks to volume
Petr Certik
petr@certik.cz
Wed May 27 09:00:32 UTC 2020
Hi everyone,

We've been running a replicated volume for a while, with three ~1 TB
bricks. Recently we added three more bricks of the same size, making it
a 2 x 3 distributed-replicated volume. However, even after a rebalance,
the `df` command on a client shows the correct used/size percentage but
wrong absolute sizes. The size still shows up as ~1 TB while in reality
it should be around 2 TB, and both the "used" and "available" sizes are
reported as about half of what they should be. The clients were on an
old version (5.5), but even after upgrading to 7.2 and remounting, the
reported sizes are still wrong. There are no heal entries. What can I
do to fix this?
OS: debian buster everywhere
Server version: 7.3-1, opversion: 70200
Client versions: 5.5-3, 7.6-1, opversions: 50400, 70200
root@imagegluster1:~# gluster volume info gv0
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 5505d350-9b61-4056-9054-de9dfb58eab7
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: imagegluster1:/data/brick
Brick2: imagegluster2:/data/brick
Brick3: imagegluster3:/data/brick
Brick4: imagegluster1:/data2/brick
Brick5: imagegluster2:/data2/brick
Brick6: imagegluster3:/data2/brick
Options Reconfigured:
features.cache-invalidation: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
root@imagegluster1:~# df -h
Filesystem         Size  Used Avail Use% Mounted on
...
/dev/sdb1          894G  470G  425G  53% /data2
/dev/sdc1          894G  469G  426G  53% /data
root@any-of-the-clients:~# df -h
Filesystem         Size  Used Avail Use% Mounted on
...
imagegluster:/gv0  894G  478G  416G  54% /mnt/gluster
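As a sanity check on the expected numbers (a minimal sketch, using the brick
sizes from the df output above): in a Distributed-Replicate volume each
replica set contributes the capacity of a single brick, since its bricks are
copies of each other, and the distribute layer sums the replica sets. So a
2 x 3 volume built from 894G filesystems should report roughly twice the size
of one brick, not the 894G the client currently shows:

```python
# Expected client-visible size for a 2 x 3 Distributed-Replicate volume.
# Within a replica set the bricks hold copies of the same data, so each
# set contributes the capacity of one brick; the distribute layer then
# sums the replica sets.
BRICK_GIB = 894        # per-brick filesystem size, from the df output above
REPLICA_SETS = 2       # 2 x 3 = 6 bricks -> 2 distribute subvolumes

expected = REPLICA_SETS * BRICK_GIB
print(f"expected size: {expected} GiB")   # 1788 GiB, i.e. roughly 1.8 TiB
print("reported size: 894 GiB")           # what the client's df shows instead
```

The reported 894G matches exactly one replica set, which is why the usage
percentage still looks right while the absolute numbers are halved.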
Let me know if there's any other info I can provide about our setup.
Cheers,
Petr Certik