[Gluster-users] gfapi async write dropping requests
Ramachandra Reddy Ankireddypalle
rcreddy.ankireddypalle at gmail.com
Wed Nov 11 17:12:42 UTC 2015
I tried disabling write-behind for the volume, but I am still seeing the
same issue. The return code in the callback reports that all the writes
have in fact succeeded.
Volume Name: dispersevol
Type: Distributed-Disperse
Volume ID: 5ae61550-51b1-4e72-875e-2e0f1f206882
Status: Started
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: pbbaglusterfs1:/ws/disk/ws_brick
Brick2: pbbaglusterfs2:/ws/disk/ws_brick
Brick3: pbbaglusterfs3:/ws/disk/ws_brick
Brick4: pbbaglusterfs1:/ws/disk2/brick
Brick5: pbbaglusterfs2:/ws/disk2/brick
Brick6: pbbaglusterfs3:/ws/disk2/brick
Options Reconfigured:
performance.write-behind: off
performance.io-thread-count: 32
On Wed, Nov 11, 2015 at 11:58 AM, Vijay Bellur <vbellur at redhat.com> wrote:
> On Wednesday 11 November 2015 10:22 PM, Ramachandra Reddy Ankireddypalle
> wrote:
>
>> Hi,
>>      I am trying to write data using the libgfapi async write API. The
>> write returns success and the callback is also invoked, but some of
>> the data never makes it to the gluster volume. If I put a sleep in the
>> code after each write, all of the writes reach the gluster volume.
>> This makes me suspect that libgfapi is dropping some of the requests.
>>
>>
> This could be related to write-behind in gluster. If you disable
> write-behind for the volume, do you observe similar results?
>
> Regards,
> Vijay
>
>