[Gluster-devel] 3.7.9 update

Raghavendra Gowdappa rgowdapp at redhat.com
Fri Mar 18 02:18:38 UTC 2016


[1] changes write-behind's retry behavior on flush failures. Do you think it needs to be called out in the release notes?

[1] http://review.gluster.org/12594
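For context on why this matters to applications: with a write-behind cache, a write error may only surface later, when the data is flushed, i.e. at fsync() or close() time. A change in retry-on-flush-failure behavior therefore changes where applications observe the error. A minimal sketch of the client-side pattern that is affected (plain Python, illustrative only, not GlusterFS code):

```python
import os

def write_durably(path, data):
    """Write data and catch errors that a write-back cache may defer.

    With a write-behind translator in the stack, write() can appear to
    succeed while the real I/O error is only reported at fsync() or
    close() time - so those calls must be checked too, not just write().
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)   # deferred write errors can surface here...
    finally:
        os.close(fd)   # ...or here, at flush time

write_durably("/tmp/wb-demo.txt", b"hello\n")
```

An application that ignores close()/fsync() errors can silently lose data regardless of how the translator retries, which is why a behavior change here is worth a release-note mention.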

regards,
Raghavendra

----- Original Message -----
> From: "Vijay Bellur" <vbellur at redhat.com>
> To: "Gluster Devel" <gluster-devel at gluster.org>, "Niels de Vos" <ndevos at redhat.com>, "Raghavendra Bhat"
> <rabhat at redhat.com>, "Dan Lambright" <dlambrig at redhat.com>, "Nithya Balachandran" <nbalacha at redhat.com>
> Sent: Friday, March 18, 2016 6:52:25 AM
> Subject: Re: [Gluster-devel] 3.7.9 update
> 
> A quick update - 3.7.9 has been tagged in the repository. I will send out
> an announcement once the packages and release notes are ready.
> 
> Thanks,
> Vijay
> 
> 
> On 03/13/2016 01:20 PM, Vijay Bellur wrote:
> > Hey All,
> >
> > I have been running tests with the latest HEAD of release-3.7 on a 2x2
> > distributed replicated volume. Here are some updates:
> >
> > - Write performance has improved, as measured by running
> > perf-test.sh [1]
> >
> >
> > v3.7.9 with FUSE client
> >
> > Testname                Time
> > emptyfiles_create       961.83
> > emptyfiles_delete       600.08
> > smallfiles_create       1508.38
> > smallfiles_rewrite      1325.60
> > smallfiles_read         598.50
> > smallfiles_reread       384.65
> > smallfiles_delete       623.66
> > largefile_create        18.33
> > largefile_rewrite       19.17
> > largefile_read          11.44
> > largefile_reread        0.31
> > largefile_delete        0.66
> > directory_crawl_create  981.21
> > directory_crawl         30.64
> > directory_recrawl       28.01
> > metadata_modify         1117.92
> > directory_crawl_delete  423.08
> >
> > v3.7.8 with FUSE client
> >
> > Testname                Time
> > emptyfiles_create       953.87
> > emptyfiles_delete       577.46
> > smallfiles_create       1837.33
> > smallfiles_rewrite      2349.37
> > smallfiles_read         604.22
> > smallfiles_reread       394.48
> > smallfiles_delete       629.74
> > largefile_create        73.86
> > largefile_rewrite       76.23
> > largefile_read          11.36
> > largefile_reread        0.31
> > largefile_delete        0.65
> > directory_crawl_create  985.16
> > directory_crawl         31.10
> > directory_recrawl       26.94
> > metadata_modify         1422.60
> > directory_crawl_delete  382.57
> >
> > Hopefully this addresses the write performance drop we observed with 3.7.8.
> >
> > - Regular file system test tools such as iozone and dbench are running
> > fine with the fuse client.
> >
> > - Rolling upgrade from 3.7.8 to the latest release-3.7 HEAD worked fine
> > with I/O happening from a fuse client.
> >
> > - There is a memory leak in the FUSE client that I observed while running
> > perf-test.sh. A statedump revealed a ref leak on several inodes. I have
> > sent a possible patch [2] which addresses the problem in my test setup,
> > but it needs careful review and more testing. Given the memory leaks we
> > have been observing with fuse, I feel it would be good to review
> > mount/fuse for possible leaks and run more tests before releasing 3.7.9.
> > I am looking at pushing out tagging by 2-3 days to midweek to accomplish
> > this. Niels, Raghavendra - can you provide additional help with reviewing
> > here?
> >
> > - Tiering has seen a lot of patches in 3.7.9. Dan, Nithya - can you
> > please assist in preparing the release notes by summarizing the changes
> > and providing input on the general readiness of tiering?
> >
> > Thanks,
> > Vijay
> >
> > [1] https://github.com/avati/perf-test/blob/master/perf-test.sh
> >
> > [2] http://review.gluster.org/#/c/13689/
> >
> >
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> 
> 
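For reference, the perf-test numbers quoted above work out to the following relative changes (a quick back-of-the-envelope sketch; the times are copied verbatim from the two tables, tests with negligible differences omitted):

```python
# Times in seconds, copied from the v3.7.8 and v3.7.9 tables above.
times = {
    #  test                  3.7.8    3.7.9
    "smallfiles_create":   (1837.33, 1508.38),
    "smallfiles_rewrite":  (2349.37, 1325.60),
    "largefile_create":    (73.86,   18.33),
    "largefile_rewrite":   (76.23,   19.17),
    "metadata_modify":     (1422.60, 1117.92),
}

for test, (old, new) in times.items():
    change = (old - new) / old * 100
    print(f"{test:20s} {change:5.1f}% faster in 3.7.9")
```

The largefile create/rewrite tests show the biggest recovery (roughly 75% faster), consistent with the write-performance drop observed in 3.7.8 having been addressed.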

