[Gluster-devel] glfs_resolve new file force lookup
Rudra Siva
rudrasiva11 at gmail.com
Sat Nov 22 03:49:31 UTC 2014
Example: resolving 10000 files with the fix.
Invocation:
ret = glfs_resolve (fs, subvol, objects[i], &loc, NULL, reval);
object [9998] = ./file_9998.txt, loc.inode (0xf11d7c), inode table
(0x7f62e0029ac0), name (meta-autoload/inode)
object [9999] = ./file_9999.txt, loc.inode (0xf11fcc), inode table
(0x7f62e0029ac0), name (meta-autoload/inode)
[the inode for each file is different, the inode table is the same].
lookups - 3 in total (spurious) - the LOOKUP packets captured by
Wireshark are identical, as follows:
Lookup, gfid : 1, parent-gfid : 0, flags 0, o_rdonly
Lookup Reply, Return Value : 0, errno : 22 (invalid argument)
Without fix/invoking with attribute flag:
ret = glfs_resolve (fs, subvol, objects[i], &loc, &iatt, reval);
- this leads to 1 lookup over the network for each object/file. In my
current case the object does not exist; even if the file did exist,
the atomic write or append that I'm trying to test would manipulate it
correctly on the server. I do have lookup-unhashed off at this time in
my tests.
On Fri, Nov 21, 2014 at 10:14 PM, Rudra Siva <rudrasiva11 at gmail.com> wrote:
> Thanks for the response. In my case, I am trying to avoid doing the
> network-level lookup, since I use the same resolve but pass a NULL
> for the attribute structure. Essentially, in my case it is an atomic
> multiple-object read/write, so I only want to resolve to the specific
> bricks and then dispatch the requests in a single RPC.
>
> Once the files can be resolved, a single read/write is mostly sent to
> the back-end. Most places seem to be calling with the attribute
> structure, which translates into a force lookup. Is it okay to move the
> force out of the if-block so it can also apply to files that we are not
> interested in? That way it will help avoid doing the lookups for
> multiple files.
>
> I was curious to know how the inode and inode table map to the brick -
> presently I have 1 brick but am planning to play with a couple of
> bricks. The returned loc has a different inode for each file, and the
> inode table is common. I do see one spurious lookup over the network
> (e.g. when I resolved 100 files, only 1 lookup was generated over the
> network) - with the force, it becomes 100 lookups, which simply return
> that the file does not exist.
>
> --Siva
>
> On Fri, Nov 21, 2014 at 1:21 PM, RAGHAVENDRA TALUR
> <raghavendra.talur at gmail.com> wrote:
>> Hi Rudra,
>>
>> The inode and inode table data structures here represent the in-memory inode
>> on the client side (gfapi).
>>
>> When we are trying to create a new file, it becomes
>> *necessary* that we confirm with the backend whether it can be created,
>> hence the force lookup.
>>
>> The only case where we avoid a force lookup is when the in-memory inode is
>> present (meaning we resolved it recently and have all the stat data)
>> and it is not the last component in the path, say "b" in the path "/a/b/c".
>>
>> Please do tell if that does not clarify your question.
>>
>> Raghavendra Talur
>>
>>
>> On Fri, Nov 21, 2014 at 5:50 PM, Rudra Siva <rudrasiva11 at gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> A new file create does not honour the force-lookup avoidance. In my
>>> case I am not interested in the attributes or in forcing a lookup, I
>>> just need the inode details. Is there a specific reason why
>>> !force_lookup is not outside the block?
>>>
>>> https://github.com/gluster/glusterfs/blob/master/api/src/glfs-resolve.c
>>>
>>> 270: if (!force_lookup) {
>>>
>>> suggested fix:
>>>
>>> move this check outside the if-else, to line 296, so that it applies
>>> to existing as well as new files.
>>>
>>> Can someone explain what the inode and inode table data structures
>>> represent?
>>> --
>>> -Siva
>>> _______________________________________________
>>> Gluster-devel mailing list
>>> Gluster-devel at gluster.org
>>> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>>
>>
>>
>>
>> --
>> Raghavendra Talur
>>
--
-Siva
More information about the Gluster-devel mailing list