[Bugs] [Bug 1724754] fallocate of a file larger than brick size leads to increased brick usage despite failure
bugzilla at redhat.com
Thu Jun 27 20:41:26 UTC 2019
https://bugzilla.redhat.com/show_bug.cgi?id=1724754
--- Comment #2 from Raghavendra Bhat <rabhat at redhat.com> ---
Description of problem:
======================
When fallocate is used to create a file whose size is greater than or equal to
the disk capacity, the CLI reports the error "fallocate: fallocate failed: No
space left on device". However, the file still gets created: checking its size
on the mount shows zero, but checking the volume space on the client (df -h)
shows that the file is occupying significant space. That is because on the
backend bricks the file is allocated up to about 90% of the disk size
(possibly stopping at the storage reserve limit).
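The semantics the caller expects can be sketched with a short, generic example
(a minimal illustration, not Gluster code; it uses posix_fallocate on a local
temp file, and the expectation is that the call is all-or-nothing -- on
failure, no residual allocation should remain, which is exactly what this bug
violates on the bricks):

```python
import os
import tempfile

def fallocate_and_report(length):
    """Preallocate `length` bytes in a temp file and report what stat sees.

    Expected behavior: on success, both the apparent size and the
    allocated blocks cover `length`; on failure (e.g. ENOSPC), the
    caller expects no residual allocation left behind.
    """
    fd, path = tempfile.mkstemp()
    try:
        try:
            os.posix_fallocate(fd, 0, length)
        except OSError as e:
            # On failure, nothing should remain allocated.
            return ("failed", os.strerror(e.errno), os.fstat(fd).st_blocks)
        st = os.fstat(fd)
        # st_blocks counts 512-byte units regardless of the fs block size.
        return ("ok", st.st_size, st.st_blocks * 512)
    finally:
        os.close(fd)
        os.unlink(path)

# A 1 MiB allocation that fits on disk succeeds and is fully backed:
print(fallocate_and_report(1 << 20))
```

In the failing case reported here, the brick-side file ends up in the opposite
state: apparent size 0, but hundreds of gigabytes of blocks still allocated.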
How reproducible:
===================
always
Steps to Reproduce:
1. Create a 1x3 volume and FUSE-mount it.
2. Use fallocate to create a file whose size is >= the size of a brick.
3. The command fails with "fallocate: fallocate failed: No space left on device".
Actual results:
============
However, the file is created and shows as a zero-size file from the mount
point, yet it occupies about 90% of the brick size on the backend, and the
same usage is reflected in df -h of the mount point.
From client:
[root at hostname2]# pwd
/mnt/nfnas/falloc-test
[root at hostname2]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel_dhcp42--60-root 44G 1.6G 43G 4% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 8.5M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 188M 827M 19% /boot
tmpfs 783M 0 783M 0% /run/user/0
hostname1:nfnas 2.2T 453G 1.8T 21% /mnt/nfnas ====> NOTICE THE USED STORAGE SPACE
[root at hostname2]# fallocate test -l 600GB
fallocate: fallocate failed: No space left on device
[root at hostname2]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel_dhcp42--60-root 44G 1.6G 43G 4% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 8.5M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 188M 827M 19% /boot
tmpfs 783M 0 783M 0% /run/user/0
hostname1:nfnas 2.2T 925G 1.3T 43% /mnt/nfnas ===> NOTICE THE INCREASE IN USED SPACE
[root at hostname2]# ls
test
[root at dhcp42-60 falloc-test]# du -sh test
0 test
[root at hostname2]# stat test
File: ‘test’
Size: 0 Blocks: 0 IO Block: 131072 regular empty file
Device: 26h/38d Inode: 13717993992350864287 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Context: system_u:object_r:fusefs_t:s0
Access: 2019-05-08 18:24:52.342403239 +0530
Modify: 2019-05-08 18:24:52.342403239 +0530
Change: 2019-05-08 18:24:52.342403239 +0530
Birth: -
[root at hostname2]#
From server:
[root at hostname1]# ls /gluster/brick1
nfnas
[root at hostname1]# ls /gluster/brick1
brick1/ brick10/ brick11/
[root at hostname1]# ls /gluster/brick1/nfnas/
falloc-test IOs logs
[root at hostname1]# ls /gluster/brick1/nfnas/falloc-test/
test
[root at hostname1]# ls /gluster/brick1/nfnas/falloc-test/test
/gluster/brick1/nfnas/falloc-test/test
[root at hostname1]# du -sh /gluster/brick1/nfnas/falloc-test/test
473G /gluster/brick1/nfnas/falloc-test/test
[root at hostname1]# stat /gluster/brick1/nfnas/falloc-test/test
File: ‘/gluster/brick1/nfnas/falloc-test/test’
Size: 0 Blocks: 990030216 IO Block: 4096 regular empty file
Device: fd17h/64791d Inode: 1749722171 Links: 2
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Context: system_u:object_r:glusterd_brick_t:s0
Access: 2019-05-08 18:24:52.343774310 +0530
Modify: 2019-05-08 18:24:52.343774310 +0530
Change: 2019-05-08 18:24:52.366773892 +0530
Birth: -
[root at hostname1]# df -h /gluster/brick1/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/GLUSTER_vg1-GLUSTER_lv1 547G 541G 6.9G 99% /gluster/brick1
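A quick way to hunt for such leaked allocations on a brick is to scan for
regular files whose apparent size is zero but which still own data blocks,
the signature seen in the stat output above (Size: 0, Blocks: 990030216).
A hedged sketch; the brick path in the comment is taken from this report and
would need adjusting:

```python
import os

def find_phantom_allocations(root):
    """Return (path, allocated_bytes) for regular files under `root` whose
    apparent size is 0 but which still occupy data blocks."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            # On Linux, st_blocks is in 512-byte units.
            if os.path.isfile(path) and st.st_size == 0 and st.st_blocks > 0:
                hits.append((path, st.st_blocks * 512))
    return hits

# Example (brick path from the report above):
# for path, allocated in find_phantom_allocations("/gluster/brick1/nfnas"):
#     print(path, allocated)
```

Ordinary empty files (0 size, 0 blocks) and regular data files (size > 0) are
not flagged; only the "empty but space-consuming" leftovers are.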