[Bugs] [Bug 1773476] gluster does not return correct filesize and blocksize after ftruncate

bugzilla at redhat.com bugzilla at redhat.com
Mon Nov 18 08:55:11 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1773476



--- Comment #1 from zhou lin <zz.sh.cynthia at gmail.com> ---
During my recent testing of glusterfs 7 I found a very confusing issue. It is
100% reproducible: with the enclosed test.c, the test application (not
glusterfs itself) crashes every time. In addition, after ftruncate, fstat does
not return the correct file size and block count. I believe this is a bug in
glusterfs 7; with glusterfs 3.12 there is no such issue.

Test steps
Precondition: set the volume option cluster.consistent-metadata: on
Run test.c with a file path on a glusterfs mount as the argument.
test.c logic:
1>      create a file with fopen
2>      fstat the file
3>      ftruncate the file
4>      mmap the file
5>      write to the file - crash
# ./test /mnt/export/testrty
ID of containing device:  [0,2a]
File type:                regular file
I-node number:            -5612996300533715526
Mode:                     100644 (octal)
Link count:               1
Ownership:                UID=0   GID=615
Preferred I/O block size: 131072 bytes
File size:                0 bytes
Blocks allocated:         0
Last status change:       Mon Nov 18 09:53:40 2019
Last file access:         Mon Nov 18 09:53:40 2019
Last file modification:   Mon Nov 18 09:53:40 2019
ftruncate is called, size= 52

File block after truncate: 0
File size after truncate: 0
Bus error (core dumped)
# The crash happens in:
snprintf((char *)m_cells, sizeof(struct DataStorageCell),
         "persistent data %ui", (unsigned int)sizeof(struct DataStorageCell));


[New LWP 7198]
Core was generated by `./a.out /mnt/log/test'.
Program terminated with signal SIGBUS, Bus error.
#0  0x00007f66829c4c5d in vsnprintf () from /lib64/libc.so.6
Missing separate debuginfos, use: dnf debuginfo-install
glibc-2.28-40.wf30.x86_64
(gdb) bt
#0  0x00007f66829c4c5d in vsnprintf () from /lib64/libc.so.6
#1  0x00007f66829a4263 in snprintf () from /lib64/libc.so.6
#2  0x00000000004015f6 in main ()
(gdb) quit
[root@mn-0:/var/lib/systemd/coredump]
# cd /home/robot
[root@mn-0:/home/robot]



When I set cluster.consistent-metadata to "off", the crash disappears and the
file size and block count are correct!
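For reference, the option can be toggled with the standard gluster CLI (the
volume name `export` is taken from the config below):

```shell
# Workaround: disable the option that produces the all-zero prebuf/postbuf
gluster volume set export cluster.consistent-metadata off

# Re-enable it to reproduce the crash
gluster volume set export cluster.consistent-metadata on
```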
[root@mn-0:/home/robot]
# ./test /mnt/export/testrtyer
ID of containing device:  [0,2a]
File type:                regular file
I-node number:            -5858193331384707222
Mode:                     100644 (octal)
Link count:               1
Ownership:                UID=0   GID=615
Preferred I/O block size: 131072 bytes
File size:                0 bytes
Blocks allocated:         0
Last status change:       Mon Nov 18 09:55:57 2019
Last file access:         Mon Nov 18 09:55:57 2019
Last file modification:   Mon Nov 18 09:55:57 2019
ftruncate is called, size= 52

File block after truncate: 1
File size after truncate: 52
[root@mn-0:/home/robot]




After studying the code, I think the difference is that when the
consistent-metadata option is on, __afr_inode_write_cbk clears the prebuf and
postbuf. FUSE therefore receives all-zero prebuf and postbuf, so fstat returns
a file size and block count of 0, which in turn makes the application crash in
snprintf. This issue is 100% reproducible in my test environment (glusterfs
7.0 community code). Could you please try the attached test.c? I think this is
a bug; in glusterfs 3.12 there is no such issue.

Some config in my env:
[root@mn-0:/home/robot]
# gluster v info export

Volume Name: export
Type: Replicate
Volume ID: 69b29303-36e0-4146-b4d5-adf0dcbe3a48
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: mn-0.local:/mnt/bricks/export/brick
Brick2: mn-1.local:/mnt/bricks/export/brick
Brick3: dbm-0.local:/mnt/bricks/export/brick
Options Reconfigured:
cluster.entry-self-heal: on
cluster.data-self-heal: on
cluster.metadata-self-heal: on
performance.client-io-threads: off
cluster.heal-timeout: 60
cluster.favorite-child-policy: mtime
network.ping-timeout: 42
server.allow-insecure: on
cluster.consistent-metadata: on
cluster.quorum-reads: true
cluster.quorum-type: auto
cluster.server-quorum-type: none
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
diagnostics.client-log-level: INFO
cluster.server-quorum-ratio: 51%
