[Gluster-devel] RE: LOOKUP conflict => OPEN fails_

Fredrik Widlund fredrik.widlund at qbrick.com
Mon Feb 8 17:29:23 UTC 2010


Hi,

Ok, it seems to be solved for now. The writer was a pure-ftpd server, and the "-O, atomic replace" flag caused the behavior. I browsed through the code briefly, and it uses, among other things, hard-link schemes to do atomic changes.
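For reference, the general shape of an atomic replace is write-to-temp-then-rename, which is what makes a half-written file invisible to readers. This is only a minimal sketch of that pattern, not pure-ftpd's actual code (which, as noted above, also involves hard links):

```python
import os
import tempfile

def atomic_replace(path, data):
    # Write the new contents to a temporary file in the same directory,
    # then rename it over the target. rename() is atomic on POSIX, so a
    # reader sees either the old file or the new one, never a partial write.
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the data hits disk first
        os.rename(tmp, path)      # atomically swap in the new file
    except BaseException:
        os.unlink(tmp)            # don't leave the temp file behind on error
        raise
```

The point is that the destination name is never truncated or partially written; the failures in this thread suggest the GlusterFS client's lookup cache can still race with that swap.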

Kind regards,
Fredrik Widlund

From: gluster-devel-bounces+fredrik.widlund=qbrick.com at nongnu.org [mailto:gluster-devel-bounces+fredrik.widlund=qbrick.com at nongnu.org] On Behalf Of Fredrik Widlund
Sent: den 8 februari 2010 16:57
To: gluster-devel at nongnu.org
Subject: [Gluster-devel] RE: LOOKUP conflict => OPEN fails_


It's getting worse and worse. I upgraded to 3.0.2, but to no avail.

The prog_index.m3u8 files are being rewritten every 10 seconds. Every other read of a newly written index file returns -1, and the file remains unavailable, possibly until its next update.
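As a stopgap on the reader side (not a fix for the underlying lookup problem), one could retry opens that fail with ENOENT, since the file reappears shortly after. A hypothetical sketch:

```python
import time

def read_with_retry(path, attempts=5, delay=0.2):
    # Retry reads that fail with ENOENT: the writer replaces the index
    # file every few seconds, and the lookup race described above makes a
    # just-written file transiently unavailable through the client.
    for i in range(attempts):
        try:
            with open(path, "rb") as f:
                return f.read()
        except FileNotFoundError:
            if i == attempts - 1:
                raise
            time.sleep(delay)
```

This only papers over the symptom; if the file stays missing until the next rewrite, the retry window would have to cover the full 10-second update interval.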

The strange thing is that until a few days ago this problem wasn't noticeable at all, and now it is severe. The only difference is the quickly growing number of files on the filesystem, now around 190k.

Kind regards,
Fredrik Widlund

From: gluster-devel-bounces+fredrik.widlund=qbrick.com at nongnu.org [mailto:gluster-devel-bounces+fredrik.widlund=qbrick.com at nongnu.org] On Behalf Of Fredrik Widlund
Sent: den 8 februari 2010 15:02
To: gluster-devel at nongnu.org
Subject: [Gluster-devel] LOOKUP conflict => OPEN fails_


Hi,

I'm running a simple AFR setup, though currently with only one backend, and 2 TCP clients. The version is 3.0.0 from Jan 20.

Basically one client is writing a large number of files, continuously, and the other client is reading.

I have a growing problem with lookup "conflicts", resulting in files being listed in directories but reads returning "-1 (No such file...".

Restarting the client does not resolve the conflict, but restarting the server does, and the files become available again.
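To make the writer/reader pattern concrete, here is a hypothetical repro harness: one thread keeps replacing a file via write-temp-then-rename while a reader loop counts opens that fail with ENOENT. On a local POSIX filesystem the count should stay at zero, since rename() is atomic; the reports in this thread are about the same pattern failing through the GlusterFS client:

```python
import os
import threading
import time

def replace_loop(path, stop, period=0.01):
    # Writer: repeatedly replace `path`, mimicking an atomic-replace upload.
    n = 0
    while not stop.is_set():
        tmp = path + ".tmp"
        with open(tmp, "w") as f:
            f.write("generation %d\n" % n)
        os.rename(tmp, path)
        n += 1
        time.sleep(period)

def read_loop(path, stop):
    # Reader: count opens that fail with ENOENT. Locally this should be 0;
    # a nonzero count through a mounted client reproduces the bug.
    misses = 0
    while not stop.is_set():
        try:
            with open(path) as f:
                f.read()
        except FileNotFoundError:
            misses += 1
    return misses
```

Pointing the reader at a GlusterFS mount while the writer runs on another client would be one way to demonstrate the race independently of pure-ftpd.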

The filesystem is a 5TB XFS hw raid-5 with around 150k files.

Debug trace of client:
[2010-02-08 13:39:29] N [trace.c:148:trace_open_cbk] replicated: 3073: (op_ret=0, op_errno=117, *fd=0x129a430)
[2010-02-08 13:39:37] N [trace.c:1837:trace_open] replicated: 3094: (loc {path=/download/90910/live/webb1/webb1/Layer3/prog_index.m3u8, ino=5042185}, flags=32768, fd=0x1296fc0, wbflags=0)
[2010-02-08 13:39:37] N [trace.c:148:trace_open_cbk] replicated: 3094: (op_ret=-1, op_errno=2, *fd=0x1296fc0)
[2010-02-08 13:39:37] W [fuse-bridge.c:858:fuse_fd_cbk] glusterfs-fuse: 3094: OPEN() /download/90910/live/webb1/webb1/Layer3/prog_index.m3u8 => -1 (No such file or directory)
[2010-02-08 13:39:38] N [trace.c:1837:trace_open] replicated: 3100: (loc {path=/download/90910/live/webb1/webb1/Layer4/prog_index.m3u8, ino=5013773}, flags=32768, fd=0x1296fc0, wbflags=0)
[2010-02-08 13:39:38] N [trace.c:148:trace_open_cbk] replicated: 3100: (op_ret=0, op_errno=117, *fd=0x1296fc0)
[2010-02-08 13:39:38] N [trace.c:1837:trace_open] replicated: 3106: (loc {path=/download/90910/live/webb1/webb1/Layer4/Period1/segment277.ts, ino=5050371}, flags=32768, fd=0x129a430, wbflags=0)
[...]

And server:
[2010-02-08 13:39:09] D [dict.c:303:dict_get] dict: @this=(nil) @key=0x7fedee4e43f3
[2010-02-08 13:39:09] D [dict.c:303:dict_get] dict: @this=(nil) @key=0x7fedee4e440b
[2010-02-08 13:39:17] D [server-protocol.c:2037:server_open_cbk] server: 1719: OPEN (null) (0) ==> -1 (No such file or directory)
[2010-02-08 13:39:18] D [server-protocol.c:2037:server_open_cbk] server: 1724: OPEN (null) (0) ==> -1 (No such file or directory)
[2010-02-08 13:39:28] D [server-resolve.c:238:resolve_path_deep] store0: RESOLVE OPEN() seeking deep resolution of /download/90910/live/webb1/webb1/Layer3/prog_index.m3u8
[2010-02-08 13:39:28] D [dict.c:303:dict_get] dict: @this=(nil) @key=0x7fedee4e43db
[2010-02-08 13:39:28] D [dict.c:303:dict_get] dict: @this=(nil) @key=0x7fedee4e43f3
[2010-02-08 13:39:28] D [dict.c:303:dict_get] dict: @this=(nil) @key=0x7fedee4e440b
[2010-02-08 13:39:28] D [dict.c:303:dict_get] dict: @this=(nil) @key=0x7fedee4e43db
[...]

Kind regards,
Fredrik Widlund



