[Gluster-users] Distributed-Disperse Shard Behavior
Fox
foxxz.net at gmail.com
Fri Feb 4 20:31:31 UTC 2022
I am using Gluster v10.1 and have created a Distributed-Disperse volume with
sharding enabled.
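The shard block size was left at its default, which I believe is 64MB. For
anyone reproducing this, it can also be set explicitly, e.g.:

# gluster volume set gv25 features.shard-block-size 64MB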
I create a 2 GB file on the volume using the 'dd' tool. 'ls' reports the
file as 2 GB, but 'df' shows 4 GB of space used on the volume. After
several minutes the volume utilization drops to the 2 GB I would expect.
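If useful, I assume the per-shard allocation can be inspected directly on
each brick while df is inflated, under the hidden .shard directory at the
brick root (shards there are named by the base file's GFID and shard index),
with something like:

# ls -lsh /data/brick1/gv25/.shard/
# du -sh /data/brick{1,2}/gv25/.shard/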
This is repeatable with different large file sizes and different
disperse/redundancy brick configurations.
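For example, a layout with a different redundancy count would be created
along the following lines (an illustrative sketch, not taken from the
attached log; gv26 is just a placeholder volume name):

# gluster volume create gv26 disperse 5 redundancy 2 tg{1,2,3,4,5}:/data/brick1/gv26
# gluster volume set gv26 features.shard on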
I've also encountered a situation, with the configuration above, where I
fill close to full disk capacity and am momentarily unable to delete the
file.
I have attached a command-line log of an example of the above, using a set
of test VMs set up as a GlusterFS cluster.
Is this initial 2x space utilization expected behavior for sharding? If so,
it would mean I can never create a file larger than half my volume size,
since the write fails with an I/O error once no space is left on the volume.
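To put rough numbers on the "half my volume" point: df reports 8.0G of
usable space for this volume (2 subvolumes x 4 data bricks x roughly 1G
each), so with the transient 2x utilization a 4 GiB file already reaches
100%, and the 6000 MB write in the attached log fails with an I/O error
partway through.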
-------------- next part --------------
# gluster volume create gv25 disperse 5 tg{1,2,3,4,5}:/data/brick1/gv25 tg{1,2,3,4,5}:/data/brick2/gv25
volume create: gv25: success: please start the volume to access data
# gluster volume set gv25 features.shard on
volume set: success
# gluster volume info
Volume Name: gv25
Type: Distributed-Disperse
Volume ID: 75e25758-4461-4eef-85f9-6ef030b59b49
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x (4 + 1) = 10
Transport-type: tcp
Bricks:
Brick1: tg1:/data/brick1/gv25
Brick2: tg2:/data/brick1/gv25
Brick3: tg3:/data/brick1/gv25
Brick4: tg4:/data/brick1/gv25
Brick5: tg5:/data/brick1/gv25
Brick6: tg1:/data/brick2/gv25
Brick7: tg2:/data/brick2/gv25
Brick8: tg3:/data/brick2/gv25
Brick9: tg4:/data/brick2/gv25
Brick10: tg5:/data/brick2/gv25
Options Reconfigured:
features.shard: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1      1017M   40M  978M   4% /data/brick1
/dev/sdc1      1017M   40M  978M   4% /data/brick2
# gluster volume start gv25
volume start: gv25: success
# mount -t glusterfs tg1:/gv25 /mnt
# cd /mnt
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1      1017M   40M  978M   4% /data/brick1
/dev/sdc1      1017M   40M  978M   4% /data/brick2
tg1:/gv25       8.0G  399M  7.6G   5% /mnt
# dd if=/dev/zero of=file bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 28.3823 s, 75.7 MB/s
# date
Fri 04 Feb 2022 07:11:08 PM UTC
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1      1017M  742M  276M  73% /data/brick1
/dev/sdc1      1017M  343M  675M  34% /data/brick2
tg1:/gv25       8.0G  4.4G  3.7G  55% /mnt
(Approximately 2 minutes later)
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1      1017M  264M  754M  26% /data/brick1
/dev/sdc1      1017M  328M  690M  33% /data/brick2
tg1:/gv25       8.0G  2.4G  5.6G  31% /mnt
# rm file
# dd if=/dev/zero of=file2 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 55.5572 s, 77.3 MB/s
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1      1017M  821M  196M  81% /data/brick1
/dev/sdc1      1017M  789M  229M  78% /data/brick2
tg1:/gv25       8.0G  8.0G   43M 100% /mnt
# rm file2
rm: cannot remove 'file2': No space left on device
# rm file2
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1      1017M   40M  978M   4% /data/brick1
/dev/sdc1      1017M   40M  978M   4% /data/brick2
tg1:/gv25       8.0G  399M  7.6G   5% /mnt
# dd if=/dev/zero of=file3 bs=1M count=6000
dd: error writing 'file3': Input/output error
dd: closing output file 'file3': Input/output error