[Gluster-users] Revisit: FORTRAN Codes and File I/O
Jeff Darcy
jdarcy at redhat.com
Thu Jul 1 01:17:59 UTC 2010
On 06/30/2010 03:33 PM, Brian Smith wrote:
> Spoke too soon. Same problem occurs minus all performance translators.
> Debug logs on the server show
>
> [2010-06-30 15:30:54] D [server-protocol.c:2104:server_create_cbk]
> server-tcp: create(/b/brs/Si/CHGCAR) inode (ptr=0x2aaab00e05b0,
> ino=2159011921, gen=5488651098262601749) found conflict
> (ptr=0x2aaab40cca00, ino=2159011921, gen=5488651098262601749)
> [2010-06-30 15:30:54] D [server-resolve.c:386:resolve_entry_simple]
> server-tcp: inode (pointer: 0x2aaab40cca00 ino:2159011921) found for
> path (/b/brs/Si/CHGCAR) while type is RESOLVE_NOT
> [2010-06-30 15:30:54] D [server-protocol.c:2132:server_create_cbk]
> server-tcp: 72: CREATE (null) (0) ==> -1 (File exists)
>
The first line almost looks like a create attempt for a file that
already exists at the server. The second and third lines look like *yet
another* create attempt, failing this time before the request is even
passed to the next translator. This might be a good time to drag out
the debug/trace translator, and sit it on top of brick1 to watch the
create calls. That will help nail down the exact sequence of events as
the server sees them, so we don't go looking in the wrong places. It
might even be useful to do the same on the client side, but perhaps not
yet. Instructions are here:
http://www.gluster.com/community/documentation/index.php/Translators/debug/trace
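For reference, loading debug/trace on top of a brick in the old-style volfile syntax looks roughly like this (a sketch only; it assumes the brick's volume is named brick1, and the new trace volume must then replace brick1 as the subvolume of whatever sits above it):

```
volume brick1-trace
  type debug/trace
  subvolumes brick1
end-volume
```

With that in place the server log should show every call and callback passing through the brick, including the create sequence above.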
In the meantime, to further identify which code paths are most likely
to be relevant, it would be helpful to know a couple more things.
(1) Is each storage/posix volume using just one local filesystem, or is
it possible that the underlying directory tree spans more than one?
This could lead to inode-number duplication, which requires extra handling.
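One quick way to check is to look at the device IDs of everything under the brick directory; if more than one shows up, the export spans filesystems and inode numbers alone are not unique. A minimal sketch (the brick path is a placeholder; substitute the "directory" option from your storage/posix volume, and note that stat -c is GNU coreutils syntax):

```shell
# Placeholder path: replace with the storage/posix export directory.
BRICK=${BRICK:-/export/brick1}

# Print the unique device ID (st_dev) of every file and directory under
# the brick.  More than one line of output means the tree crosses a
# mount boundary, so st_ino values can collide.
find "$BRICK" -exec stat -c %d {} + 2>/dev/null | sort -u
```

A single line of output means the brick sits entirely on one filesystem and inode-number duplication from this cause can be ruled out.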
(2) Is either of the server-side volumes close to full? This
could result in creating an extra "linkfile" on the subvolume/server
where we'd normally create the file, pointing to where we really created
it due to space considerations.
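Linkfiles are easy to spot on the brick itself: they are zero-length files whose mode is just the sticky bit (shown as ---------T in ls), carrying an xattr that names the subvolume holding the real data. A rough way to look for them (the brick path is a placeholder, and the getfattr step assumes the attr package is installed):

```shell
# Placeholder path: replace with the storage/posix export directory.
BRICK=${BRICK:-/export/brick1}

# Candidate linkfiles: zero-length regular files with the sticky bit set.
find "$BRICK" -type f -perm -1000 -size 0 2>/dev/null

# For any hit, the xattr names the subvolume with the real file, e.g.:
#   getfattr -n trusted.glusterfs.dht.linkto -e text "$BRICK"/b/brs/Si/CHGCAR
```

If CHGCAR turns up as a linkfile on one brick and a real file on the other, that would point strongly at the space-driven code path.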