[Gluster-devel] write FOP succeeds when a glusterfs volume is full and write-behind is on

Lian, George (Nokia - CN/Hangzhou) george.lian at nokia.com
Fri Jan 13 08:18:38 UTC 2017


Hi,

I tested the write FOP in the case where a glusterfs volume is full; the detailed process and some investigation are below:
------------------------------------------------------------------------------------------------------------------------------------------
1) use dd to fill the exported volume
dd if=/dev/zero of=/mnt/export/large.tar bs=10M count=100000

2) set the write-behind option on (gluster volume set <volname> performance.write-behind on)
   # echo "asdf" > /mnt/export/test
   #
 No error is reported here, and cat'ing the file afterwards gives:
   # cat /mnt/export/test
   cat: /mnt/export/test: No such file or directory

   # strace echo "asdf" > /mnt/export/test 
        write(1, "asdf\n", 5)                   = 5
        close(1)                                = -1 ENOSPC (No space left on device)

3) set the write-behind option off (gluster volume set <volname> performance.write-behind off)
   # echo "asdf" > /mnt/export/test
   -bash: echo: write error: No space left on device
 An error is reported here.
   # cat /mnt/export/test
   cat: /mnt/export/test: No such file or directory

   # strace echo "asdf" > /mnt/export/test 
        write(1, "asdf\n", 5)                   = -1 ENOSPC (No space left on device)
        close(1)                                = 0
-------------------------------------------------------------------------------------------------------------------------
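For completeness, the same behaviour can be checked from C by testing the return value of every call. Below is a minimal sketch, assuming the volume is still mounted at /mnt/export as above; with write-behind on, the ENOSPC is expected to show up only at fsync()/close():

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* open a new file on the full volume, like the shell redirection does */
        int fd = open("/mnt/export/test", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* with write-behind on, this is expected to return 5 (data only
           cached in the client) even though the bricks have no space left */
        if (write(fd, "asdf\n", 5) < 0)
            perror("write");

        /* fsync() forces the cached data out, so the deferred ENOSPC
           should show up here while the fd is still open */
        if (fsync(fd) < 0)
            perror("fsync");

        /* close() is the last chance to see the deferred error */
        if (close(fd) < 0)
            perror("close");

        return 0;
    }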
In my view, the behaviour with write-behind off is the correct one for the application.
But when the write-behind option is on, the write FOP returns success even though the data cannot really be written to the gluster volume.
This can confuse applications and lead to further application-level problems.

Although close() does return the error, as you know many applications do not call the close FOP until they exit.
In that case the write FOP appears to succeed, but when another thread wants to read the data, it cannot read anything.
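
A possible mitigation on the application side (just a sketch of an idea, not an existing Gluster API; write_and_sync is a name made up here) is to fsync() after important writes, so that a deferred write-behind error is seen while the fd is still open:

    #include <errno.h>
    #include <unistd.h>

    /* write a full buffer and flush it to the bricks; returns -1 and sets
       errno if either step fails, so a deferred ENOSPC is seen here
       instead of only at close() */
    static int write_and_sync(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        while (len > 0) {
            ssize_t n = write(fd, p, len);
            if (n < 0) {
                if (errno == EINTR)
                    continue;   /* retry interrupted writes */
                return -1;      /* immediate error (e.g. write-behind off) */
            }
            p += n;
            len -= (size_t)n;
        }
        return fsync(fd);       /* surfaces the cached error when write-behind is on */
    }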

Do you think this is an issue?
If not, do you have any comments on how to deal with this inconvenience?
 

Best Regards,
George
