[Bugs] [Bug 1433460] Extra line of ^@^@^@ at the end of the file

bugzilla at redhat.com
Mon Apr 3 22:30:15 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1433460

patrice.linel at genusplc.com changed:

           What    |Removed                               |Added
----------------------------------------------------------------------------
              Flags|needinfo?(patrice.linel at genusplc.com)|



--- Comment #2 from patrice.linel at genusplc.com ---

OK, I did some more digging; it took a while to isolate the bug.
> 
> Could you explain a little bit more about the application?
> - How is it writing files?
> - Can you reproduce this on other volume types? (Try a single-brick volume
> too)
> - How are you capturing the strace?
With this simplified program (and strace -o to capture the output), the strace
no longer shows anything unusual. Let me know if you still want the traces.
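
For reference, this is roughly how I capture a trace (a sketch; the output
file name a.strace is arbitrary, and the -e filter just narrows the log to
the size-changing syscalls):

    strace -f -e trace=write,ftruncate -o a.strace ./a.out t.dat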

> - Can you provide a shell script or simple application that reproduces this?

Yes, the application is a Fortran program that writes matrices to files.
The following code reproduces the bug:

program foo
  implicit none
  double precision, dimension(:,:), allocatable :: r
  character(30) :: ifout
  integer :: j

  ! Fill a 1000x10000 matrix with random values in [0,10)
  allocate(r(1000,10000))
  call random_number(r)
  r = r*10

  ! The output file name is the first command-line argument
  call get_command_argument(1,ifout)

  ! Write one record per column: 1000 single-digit integers
  open(10,file=ifout)
  do j=1,10000
    write(10,'(1000(i1,1X))') int(r(:,j))
  end do
  close(10)

  deallocate(r)
end program foo

Compiled with gfortran 6.1.0 (gfortran f.f90) and executed as ./a.out t.dat
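
To see the corruption, one way (a sketch) is to check the size with ls and
dump the last bytes of a bad file; the tail shows up as NUL bytes (the ^@
characters in od -c output):

    ls -l t.dat
    tail -c 64 t.dat | od -c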



> - What is the exact glusterfs version?
3.10

> - Does this only happen over NFS, FUSE or other types of mounting?
FUSE and Ganesha; we had some instability with the FUSE mount, so now we are
using Ganesha NFSv4.0.
> - What options are set for the volume (output of 'gluster volume info
> <VOL>')?
gluster volume info wau_inbox

Volume Name: wau_inbox
Type: Distribute
Volume ID: 460eb44f-aecb-426b-bd53-16e72522a422
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/glusterfs/vol1/brick1
Brick2: gfs2:/glusterfs/vol1/brick1
Options Reconfigured:
features.scrub-freq: weekly
features.scrub-throttle: normal
features.scrub: Active
features.bitrot: on
performance.cache-size: 1073741824
performance.io-thread-count: 4
user.smb: disable
user.cifs: disable
features.shard-block-size: 128MB
features.shard: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
features.cache-invalidation: on
performance.flush-behind: off
nfs-ganesha: enable


We have another similar volume without sharding, and I cannot reproduce the
bug on it.
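
If it helps, one way to isolate the sharding variable would be a fresh
single-brick volume with only sharding enabled (a sketch; the volume name
shardtest, brick path, and mount point are made up):

    gluster volume create shardtest gfs1:/glusterfs/vol1/brick_shardtest
    gluster volume set shardtest features.shard on
    gluster volume set shardtest features.shard-block-size 128MB
    gluster volume start shardtest
    mount -t glusterfs gfs1:/shardtest /mnt/shardtest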

Below you can see the problem; the correct files are 20000000 bytes, while
the bad ones are larger (padded with NUL bytes, the ^@ in the summary).


-rw-rw-r-- 1 plinel plinel  20971520 Apr  3 17:12 132461.txt
-rw-rw-r-- 1 plinel plinel  21048576 Apr  3 17:12 132462.txt
-rw-rw-r-- 1 plinel plinel  20000000 Apr  3 17:12 132463.txt
-rw-rw-r-- 1 plinel plinel  20000000 Apr  3 17:12 132464.txt
-rw-rw-r-- 1 plinel plinel  20000000 Apr  3 17:12 132465.txt
-rw-rw-r-- 1 plinel plinel  20000000 Apr  3 17:12 132466.txt
-rw-rw-r-- 1 plinel plinel  22097152 Apr  3 17:12 132467.txt
-rw-rw-r-- 1 plinel plinel  20000000 Apr  3 17:12 132468.txt
-rw-rw-r-- 1 plinel plinel  20000000 Apr  3 17:12 132469.txt
-rw-rw-r-- 1 plinel plinel  20000000 Apr  3 17:12 132470.txt
-rw-rw-r-- 1 plinel plinel  20000000 Apr  3 17:12 132471.txt
-rw-rw-r-- 1 plinel plinel  21048576 Apr  3 17:12 132472.txt
-rw-rw-r-- 1 plinel plinel  21048576 Apr  3 17:12 132473.txt
-rw-rw-r-- 1 plinel plinel  21048576 Apr  3 17:12 132474.txt
-rw-rw-r-- 1 plinel plinel  21048576 Apr  3 17:12 132475.txt
-rw-rw-r-- 1 plinel plinel  20000000 Apr  3 17:12 132476.txt
-rw-rw-r-- 1 plinel plinel  21048576 Apr  3 17:12 132477.txt
-rw-rw-r-- 1 plinel plinel  20000000 Apr  3 17:12 132478.txt
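
Since all of these should be exactly 20000000 bytes, the oversized ones can
be listed directly (a sketch; find's -size with a c suffix matches an exact
byte count):

    find . -name '*.txt' ! -size 20000000c -ls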

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

