[Gluster-devel] RFC on fix to bug #802414

Anand Avati aavati at redhat.com
Thu Jun 7 03:08:55 UTC 2012

On 06/05/2012 04:53 AM, Raghavendra Gowdappa wrote:

>> Should we make this migration "on demand" (the way inode migration
>> happens), or can we retain the current approach of migrating all
>> opened fds en masse and try on-demand migration in fuse_resolve_fd
>> only for those fds on which migration was never attempted
>> (7503c63ee141931556cf066b)?

"on demand" migration goes in the opposite direction of where we want to 
go w.r.t pro-active graph cleanup. We really want to make sure we get 
the handle established on the new graph before "giving up" the old one.

> on a related note, if we are creating a new fd, we would be losing all
> context in the old fd, so that automagic lock-migration (to the new
> graph) in protocol/client won't happen. We should be migrating
> fd-contexts. If so, we need to discuss specifics of the same.

The lock migration would have been an issue with the design we had for 
it initially. The latest implementation of lock accounting abstracts it 
pretty nicely. All we need to do is make sure the new fd performs:

new_fd->lk_ctx = fd_lk_ctx_ref (old_fd->lk_ctx);

This needs to be done right after fd_create() as we need the above 
pointer to be set before client_open_cbk().
