[Gluster-users] Coherency problem with file buffer cache

Brian Foster bfoster at redhat.com
Tue Jun 12 14:11:03 UTC 2012


On 06/12/2012 09:35 AM, Brian Candler wrote:
> On Tue, Jun 12, 2012 at 05:52:37PM +0530, Sabyasachi Ruj wrote:
>> This will update the contents of the file samplefile to 'B'. You
>> will see that client1 still shows 'A'. This does not happen when you
>> update the file from the same client where verify is running, that
>> is, client1.
>>
>> I know that direct I/O mode can help to a certain extent, but it does
>> not guarantee an atomic transaction either.
> 
> I can confirm the behaviour described (with 3.3.0).
> 
> Running strace on the verify process shows it doing read(3) = 2048 every
> time; however, an strace on the glusterfs (FUSE) process shows only a
> single 2048+ byte transfer, the first time:
> 
> writev(7, [{"\20\10\0\0\0\0\0\0\367\1\0\0\0\0\0\0", 16}, {"DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD"..., 2048}], 2) = 2064
> 
> However, closing and re-opening the file makes it see the new contents.
> 

Yeah, by default fuse purges the page cache on every file open (so if
you ran a 'cat /mnt/file' in parallel with your verify tool, it should
also cause your verification thread to pick up the new data).
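
For illustration, here is a minimal sketch of that effect; purge_cache()
is just a made-up helper name, not anything in the gluster or fuse API.
Since any open() of the file should drop the cached pages, briefly
opening and closing a second descriptor alongside the verify loop should
behave like the parallel 'cat':

#include <fcntl.h>
#include <unistd.h>

/* Sketch: on a default fuse mount, open() purges the file's cached
 * pages, so opening and closing a second descriptor should force the
 * verify thread's next read() to fetch fresh data from the server. */
static void purge_cache(const char *path)
{
        int fd = open(path, O_RDONLY);

        if (fd >= 0)
                close(fd);
}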

I've been looking into what it would take to avoid purging the cache on
every open like this, and found at least one limitation in the fuse
validation mechanism. Taking a quick look at the read path, it looks
like an invalidation of the cached data would only occur if your thread
went and tried to read beyond EOF. This is probably worth filing a bug
for, even if it is something that might ultimately be fixed in fuse...
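
If that reading is right, then a hypothetical (and untested) probe like
the one below, dropped into the verify loop, might trip that
invalidation path without the close/re-open; probe_eof() is just an
illustrative name:

#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical sketch based on my quick read of the fuse read path:
 * attempt a one-byte read starting at the cached EOF, so that fuse is
 * forced to revalidate the file size and (in theory) drop stale pages.
 * Untested. */
static void probe_eof(int fd)
{
        struct stat st;
        char c;

        if (fstat(fd, &st) == 0)
                pread(fd, &c, 1, st.st_size);
}

Either way, the close()/re-open change in your diff below is the
reliable workaround for now.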

Brian

> --- gtest.c.orig	2012-06-12 14:23:04.399365721 +0100
> +++ gtest.c	2012-06-12 14:31:37.811383033 +0100
> @@ -134,6 +134,8 @@
>    if (strcmp(argv[2], "verify") == 0) {
>      int i = 0;
>      while(continue_loop) {
> +        close(fd);
> +        fd = open_file(argv[1], &is_new_file);
>  	lockFile(fd);
>  	page_read(fd, buffer_r);
>  	unlockFile(fd);
> 
> Regards,
> 
> Brian.